Publications

Jeffrey A. Friedman, “Issue-Image Tradeoffs and the Politics of Foreign Policy: How Leaders Use Foreign Policy Positions to Shape their Personal Images,” World Politics, Vol. 75, No. 2 (2023), pp. 280-315 [article, supplement, data and code].

This article explains how leaders can use foreign policy issues to shape their personal images. It argues, in particular, that presidents and presidential candidates can use hawkish foreign policies to craft valuable impressions of leadership strength. This dynamic can give leaders incentives to take foreign policy positions that are more hawkish than what voters actually want. The article documents the causal foundations of this argument with a preregistered survey experiment; it presents archival evidence demonstrating that presidential candidates use unpopular foreign policies to convey attractive personal traits; and it uses observational data to show how those tradeoffs have shaped three decades of presidential voting. The article’s theory and evidence indicate that democratic responsiveness in foreign policy is not as simple as “doing what voters want.” Leaders often need to choose between satisfying voters’ policy preferences and crafting personal images that voters find appealing. Aligning foreign policy with voters’ preferences is thus easier said than done, and it is not always the best way for leaders to maximize their public standing.

Laura Resnick Samotin, Jeffrey A. Friedman, and Michael C. Horowitz, “Obstacles to Harnessing Analytic Innovation in Foreign Policy Analysis: A Case Study of Crowdsourcing in the U.S. Intelligence Community,” Intelligence and National Security, Vol. 38, No. 4 (2023), pp. 558-575 [article].

We conducted interviews with national security professionals to examine why the U.S. Intelligence Community has not systematically incorporated prediction markets or prediction polls into its intelligence reporting. This behavior is surprising since crowdsourcing platforms often generate more accurate predictions than traditional forms of intelligence analysis. Our interviews suggest three principal barriers to adopting these platforms: (i) bureaucratic politics, (ii) decision-makers’ lack of interest in probability estimates, and (iii) limited knowledge of these platforms’ ability to generate accurate predictions. Interviewees offered many actionable suggestions for addressing these challenges in future efforts to incorporate crowdsourcing platforms or other algorithmic tools into intelligence tradecraft.

Jeffrey A. Friedman, “Progressive Grand Strategy: A Synthesis and Critique,” Journal of Global Security Studies, Vol. 8, No. 1 (2023), ogac032 [article].

This paper evaluates emerging progressive ideas about U.S. grand strategy. Progressives’ distinctive analytic premise is that structural inequality undermines America’s national interests. To combat this problem, progressives recommend retrenching U.S. primacy in a manner that resembles the grand strategy of restraint. But progressives also seek to build a more democratic international order that can facilitate new forms of global collective action. Progressives thus advocate ambitious international goals at the same time as they reject the institutional arrangements that the United States has traditionally used to promote its global agenda. No other grand strategy shares those attributes. After articulating the core elements of a progressive grand strategy, the paper explores that strategy’s unique risks and tradeoffs and raises several concerns about the theoretical and practical viability of progressive ideas.

Jeffrey A. Friedman, “Is U.S. Grand Strategy Dead? The Political Foundations of Deep Engagement after Donald Trump,” International Affairs, Vol. 98, No. 4 (2022), pp. 1289-1305 [article].

International relations scholars frequently warn that the American political system has become too fractured to sustain a coherent grand strategy. This perception generally rests on two premises: that President Donald Trump led an unprecedented assault on established principles of U.S. foreign policy, and that Democrats and Republicans have become so polarized that they can no longer agree on a common vision for global leadership. By contrast, this paper argues that the grand strategy of deep engagement retains robust bipartisan support. Even though President Trump rejected more expansive conceptions of liberal internationalism, his behavior was largely consistent with deep engagement’s principles. Moreover, when Trump departed from deep engagement – as with questioning the U.S. commitment to NATO – his actions did not reflect voters’ policy preferences. In fact, polling data indicate that public support for deep engagement is at least as strong today as it has been at any other point since the end of the Cold War. Altogether, the paper thus demonstrates that the grand strategy of deep engagement is less embattled, and more politically viable, than the conventional wisdom suggests.

Jeffrey A. Friedman, War and Chance: Assessing Probability in International Politics (New York: Oxford University Press, 2019). Published through Oxford’s “Bridging the Gap” series of policy-relevant scholarship [introduction, Amazon].

War and Chance shows how foreign policy officials often avoid assessing uncertainty and argues that this behavior undermines high-stakes decision making. Pushing back against the widespread idea that assessments of uncertainty in international politics are too subjective to be useful, the book explains how foreign policy analysts can form these judgments in a manner that is theoretically coherent, empirically meaningful, politically defensible, practically valuable, and sometimes logically necessary for making sound choices. Each of these claims contradicts widespread skepticism about the value of probabilistic reasoning in foreign policy analysis, and shows that placing greater emphasis on assessing uncertainty can improve nearly any foreign policy debate. The book substantiates these claims by examining critical episodes in the history of U.S. national security policy and by drawing on a diverse range of quantitative evidence, including a database that contains nearly one million geopolitical forecasts and experimental studies involving hundreds of national security professionals.
-Winner, 2020 Peter Katzenstein Book Prize for best first book in international relations, comparative politics, or political economy. Perspectives on Politics: “The best book on improving decision making through rigorous empirical analysis since Philip Tetlock’s landmark Expert Political Judgment.”

Jeffrey A. Friedman, “Priorities for Preventive Action: Explaining Americans’ Divergent Reactions to 100 Public Risks,” American Journal of Political Science, Vol. 63, No. 1 (2019), pp. 181-196 [paper, supplement, replication materials].

Why do Americans’ priorities for combating risks like terrorism, climate change, and violent crime often seem so uncorrelated with the danger those risks objectively present? Many scholars believe the answer to this question is that heuristics, biases, and ignorance cause voters to misperceive risk magnitudes. By contrast, this paper argues that Americans’ risk priorities primarily reflect judgments about the extent to which some victims deserve more protection than others and the degree to which it is appropriate for government to intervene in different areas of social life. The paper supports this argument with evidence drawn from a survey of 3,000 Americans, using pairwise comparisons to understand how respondents perceive nine dimensions of 100 life-threatening risks. Respondents were well-informed about these risks’ relative magnitudes – the correlation between perceived and actual mortality was 0.83 – but those perceptions explained little variation in policy preferences compared with judgments about the status of victims and the appropriate role of government. These findings hold across political parties, education levels, and other demographics. The paper thus argues that the key to understanding Americans’ divergent reactions to risk lies more with their values than with their grasp of factual information.

Jeffrey A. Friedman and Richard Zeckhauser, “Analytic Confidence in Political Decision Making: Theoretical Principles and Experimental Evidence from National Security Professionals,” Political Psychology, Vol. 39, No. 5 (2018), pp. 1069-1087 [paper, supplement].

When making decisions under uncertainty, it is important to distinguish between the probability that a judgment is true and the confidence analysts possess in drawing their conclusions. Yet analysts and decision makers often struggle to define “confidence” in this context, and many ways that scholars use this term do not necessarily facilitate decision making under uncertainty. To help resolve this confusion, we argue for disaggregating analytic confidence into three dimensions: reliability of available evidence, range of reasonable opinion, and responsiveness to new information. After explaining how these attributes hold different implications for decision making in principle, we present survey experiments examining how analysts and decision makers employ these ideas in practice. Our first experiment found that each conception of confidence distinctively influenced national security professionals’ evaluations of high-stakes decisions. Our second experiment showed that inexperienced assessors of uncertainty could consistently discriminate among our conceptions of confidence when making political forecasts. We focus on national security, where debates about defining “confidence levels” have clear practical implications. But our theoretical framework generalizes to nearly any area of political decision making, and our empirical results provide encouraging evidence that analysts and decision makers can engage these abstract elements of uncertainty.

Jeffrey A. Friedman, Joshua Baker, Barbara Mellers, Philip Tetlock, and Richard Zeckhauser, “The Value of Precision in Probability Assessment: Evidence from a Large-Scale Geopolitical Forecasting Tournament,” International Studies Quarterly, Vol. 62, No. 2 (2018), pp. 410-422 [paper, supplement].

This article employs a unique data set containing 888,328 geopolitical forecasts to examine the extent to which analytic precision improves the predictive value of foreign policy analysis. Scholars, practitioners, and pundits often prefer to leave their assessments of uncertainty vague when debating foreign policy, on the grounds that clearer probability estimates would provide arbitrary detail instead of useful insight. However, we find that coarsening numeric probability assessments in a manner consistent with common qualitative expressions – including expressions currently recommended for use by intelligence analysts – consistently sacrifices predictive accuracy. This result does not depend on extreme probability estimates, short time horizons, particular scoring rules, or individual-level attributes that are difficult to cultivate. At a practical level, our analysis indicates that it would be possible to make foreign policy discourse more informative by supplementing natural language-based descriptions of uncertainty with quantitative probability estimates. Most broadly, our findings advance long-standing debates over the limits of subjective judgment when assessing social phenomena, showing how explicit probability assessments are empirically justifiable even in domains featuring as much complexity as world politics.
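
The accuracy cost of coarsening can be illustrated with a small simulation. The sketch below is purely illustrative, using synthetic well-calibrated forecasts and invented bin cutpoints rather than the article’s data or the intelligence community’s actual categories: it compares the mean Brier score of numeric probability forecasts with the same forecasts rounded to the midpoints of qualitative-style bins.

```python
import random

def brier(p, outcome):
    """Squared error between a probability forecast and a 0/1 outcome (lower is better)."""
    return (p - outcome) ** 2

def coarsen(p, cutpoints):
    """Map a numeric probability to the midpoint of its qualitative bin."""
    edges = [0.0] + list(cutpoints) + [1.0]
    for lo, hi in zip(edges, edges[1:]):
        if lo <= p <= hi:
            return (lo + hi) / 2

# Synthetic, well-calibrated forecasts: each event occurs with its stated probability.
random.seed(0)
forecasts = [random.random() for _ in range(100_000)]
outcomes = [1 if random.random() < p else 0 for p in forecasts]

# Seven bins loosely mirroring qualitative expressions of likelihood
# ("remote" ... "almost certain"); these cutpoints are illustrative only.
cuts = [0.05, 0.20, 0.45, 0.55, 0.80, 0.95]

raw = sum(brier(p, o) for p, o in zip(forecasts, outcomes)) / len(forecasts)
coarse = sum(brier(coarsen(p, cuts), o) for p, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"mean Brier score, numeric forecasts:   {raw:.4f}")
print(f"mean Brier score, coarsened forecasts: {coarse:.4f}")
```

Because coarsening replaces each forecast with its bin midpoint, it adds the within-bin variance of the forecasts to the expected Brier score, so the coarsened forecasts score systematically worse even when the underlying judgments are well calibrated.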

Jeffrey A. Friedman, Jennifer S. Lerner, and Richard Zeckhauser, “Behavioral Consequences of Probabilistic Precision: Experimental Evidence from National Security Professionals,” International Organization, Vol. 71, No. 4 (2017), pp. 803-826 [paper, supplement].

National security is one of many fields where experts make vague probability assessments when evaluating high-stakes decisions. This practice has always been controversial, and it is often justified on the grounds that making probability assessments too precise could bias analysts or decision makers. Yet these claims have rarely been submitted to rigorous testing. In this paper, we translate behavioral concerns about probabilistic precision into falsifiable hypotheses, which we evaluate through survey experiments involving national security professionals. Contrary to conventional wisdom, we find that decision makers responding to quantitative probability assessments are less willing to support risky actions and more receptive to gathering additional information. Yet we also find that when respondents estimate probabilities themselves, quantification magnifies overconfidence, particularly among low-performing assessors. These results hone wide-ranging concerns about probabilistic precision into a specific and previously undocumented bias which training may be able to correct.

Jeffrey A. Friedman and Richard Zeckhauser, “Why Assessing Estimative Accuracy is Feasible and Desirable,” Intelligence and National Security, Vol. 31, No. 2 (2016), pp. 178-200 [paper].

The US Intelligence Community (IC) has been heavily criticized for making inaccurate estimates. Many scholars and officials believe that these criticisms reflect inappropriate generalizations from a handful of cases, thus producing undue cynicism about the IC’s capabilities. Yet there is currently no way to evaluate this claim, because the IC does not systematically assess the accuracy of its estimates. Many scholars and practitioners justify this state of affairs by claiming that assessing estimative accuracy would be impossible, unwise, or both. This article shows that those arguments are generally unfounded. Assessing estimative accuracy is feasible and desirable. Doing so would not require altering existing tradecraft, and it would address several political and institutional problems that the IC faces today.

Jeffrey A. Friedman, “Using Power Laws to Estimate Conflict Size,” Journal of Conflict Resolution, Vol. 59, No. 7 (2015), pp. 1216-1241 [paper, data and code].

Casualty counts are often controversial, and thorough research can only go so far in resolving such debates – there will almost always be missing data, and thus a need to draw inferences about how comprehensively violence has been recorded. This paper addresses that challenge by developing an estimation strategy based on the observation that violent events are often distributed according to power laws, a pattern which structures our expectations about what event data on armed conflict would look like if those data were complete. After validating this technique with respect to reported U.S. casualties in Iraq, Vietnam, and Korea, the paper applies it to estimate the scale of the American Indian Wars.
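
The logic of this kind of estimation can be sketched in a few lines. The toy reconstruction below rests on stated assumptions (synthetic event sizes drawn from a continuous power law, a hypothetical reporting threshold) and is not the paper’s method or data: it fits the tail exponent from the recorded events alone, then uses the fitted distribution to infer how many events the record is missing.

```python
import math
import random

def hill_alpha(sizes, xmin):
    """Maximum-likelihood (Hill) estimate of a power-law exponent alpha,
    for observations x >= xmin with density proportional to x^(-alpha)."""
    tail = [x for x in sizes if x >= xmin]
    return 1 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic "complete" record of violent events: continuous power law, alpha = 2.5,
# sampled by inverting the Pareto survival function.
random.seed(1)
alpha_true, xmin = 2.5, 1.0
events = [xmin * (1 - random.random()) ** (-1 / (alpha_true - 1)) for _ in range(50_000)]

# Suppose only events causing at least 5 casualties were recorded.
threshold = 5.0
observed = [x for x in events if x >= threshold]

# Fit the tail exponent from the observed data alone...
alpha_hat = hill_alpha(observed, threshold)

# ...then extrapolate: under a power law, the share of events at or above the
# threshold is (threshold / xmin) ** (1 - alpha), which implies how many
# events the record is missing below the threshold.
share_recorded = (threshold / xmin) ** (1 - alpha_hat)
n_total_est = len(observed) / share_recorded
print(f"fitted alpha: {alpha_hat:.2f} (true value {alpha_true})")
print(f"estimated total events: {n_total_est:.0f} (true count {len(events)})")
```

The same extrapolation, applied to casualties rather than event counts, is what allows a fitted power law to bound how comprehensively violence has been recorded.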

Jeffrey A. Friedman and Richard Zeckhauser, “Handling and Mishandling Estimative Probability: Likelihood, Confidence, and the Search for Bin Laden,” Intelligence and National Security, Vol. 30, No. 1 (2015), pp. 77-99 [paper, website].

In a series of reports and meetings in Spring 2011, intelligence analysts and officials debated the chances that Osama bin Laden was living in Abbottabad, Pakistan. Estimates ranged from a low of 30 or 40 percent to a high of 95 percent. The president stated that he found this discussion confusing, even misleading. Motivated by that experience, and by broader debates about intelligence analysis, this article examines the conceptual foundations of expressing and interpreting estimative probability. It explains why a range of probabilities can always be condensed into a single point estimate that is clearer (but logically no different) than standard intelligence reporting, and why assessments of confidence are most useful when they indicate the extent to which estimative probabilities might shift in response to newly gathered information.
-Featured on NPR’s “All Things Considered,” July 23, 2014 [interview].

Stephen Biddle, Jeffrey A. Friedman, and Jacob N. Shapiro, “Testing the Surge: Why Did Violence Decline in Iraq in 2007?” International Security, Vol. 37, No. 1 (2012), pp. 7-40 [paper, website, supplement, replication files].

Combines recently declassified data on local-level violence in Iraq with information gathered from an original series of 70 structured interviews with Coalition officers in order to test why violence declined in Iraq in 2007. Through both quantitative and qualitative analyses, the article argues that this process was driven by an interaction between the Surge and the Sunni Awakening: both were necessary but neither alone was sufficient, whereas other explanations (including the dynamics of sectarian cleansing) cannot account for local or national violence trends. An important implication is that while U.S. policy deserves partial credit for reducing Iraq’s violence, similar methods cannot be expected to work elsewhere without local equivalents of the Sunni Awakening.
-See also Biddle, Friedman, and Shapiro, “Correspondence: Assessing the Synergy Thesis,” International Security, Vol. 37, No. 4 (2013), pp. 173-198 [website]; H-Diplo/ISSF article review 2013:4 [website]; also reprinted in The New Counterinsurgency Era in Critical Perspective, eds. Celeste Ward Gventer, David Martin Jones, and M.L.R. Smith (NY: Palgrave Macmillan, 2014) [website].

Jeffrey A. Friedman and Richard Zeckhauser, “Assessing Uncertainty in Intelligence,” Intelligence and National Security, Vol. 27, No. 6 (2012), pp. 824-847 [paper, website, data].

Applies insights from decision theory to critique current U.S. intelligence methods (or “tradecraft”). Argues that the goal of estimative intelligence should be to assess uncertainty, and yet many existing tradecraft methods are instead designed to reduce (and ideally to eliminate) uncertainty in a fashion that can impair the accuracy, clarity, and utility of intelligence products. The article is based on a review of prominent tradecraft manuals, interviews with intelligence analysts and officials, and empirical analysis spanning 379 declassified National Intelligence Estimates.

Stephen Biddle, Jeffrey A. Friedman, and Stephen Long, “Civil War Intervention and the Problem of Iraq,” International Studies Quarterly, Vol. 56, No. 1 (2012), pp. 85-98 [paper, website, data].

Since at least 2006, much of the debate over U.S. policy in Iraq has turned on different assessments of the prospective danger of foreign intervention following U.S. withdrawal. This paper systematically assesses that risk via a two-stage analysis: first, by using dyad-year data to assess the way specific factors central to the Iraq debate correlate with the incidence of civil war intervention more broadly; and second, by leveraging this empirical model through a Monte Carlo simulation to predict the likelihood that Iraq’s neighbors might intervene in potentially renewed civil violence.
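
The second stage of such an analysis can be sketched as follows. This fragment is purely illustrative: the coefficients, standard errors, and covariate profile are invented, not the article’s estimates. It propagates the uncertainty in a fitted logit model through a Monte Carlo simulation to produce a predicted probability of intervention along with a simulation interval.

```python
import math
import random

def logistic(z):
    """Inverse-logit link: maps a linear predictor to a probability."""
    return 1 / (1 + math.exp(-z))

# Hypothetical point estimates and standard errors from a logit model of
# civil war intervention (intercept, contiguity, ethnic tie, military gap).
# These numbers are illustrative only.
betas = [-4.0, 1.2, 0.8, -0.5]
ses = [0.6, 0.3, 0.25, 0.2]

# Covariate profile for one hypothetical potential intervener (1 = present).
x = [1.0, 1.0, 1.0, 0.0]

# Monte Carlo: draw coefficient vectors from their estimated sampling
# distribution and compute the implied intervention probability each time.
random.seed(2)
draws = [
    logistic(sum(random.gauss(b, s) * xi for b, s, xi in zip(betas, ses, x)))
    for _ in range(10_000)
]
draws.sort()
mean_p = sum(draws) / len(draws)
lo, hi = draws[len(draws) // 40], draws[-len(draws) // 40]  # ~95% interval
print(f"predicted P(intervention): {mean_p:.3f} (95% MC interval {lo:.3f}-{hi:.3f})")
```

Simulating from the coefficients’ sampling distribution, rather than plugging in point estimates alone, is what turns the first-stage regression into a probabilistic prediction with honest uncertainty bounds.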

Jeffrey A. Friedman, “Manpower and Counterinsurgency: Empirical Foundations for Theory and Doctrine,” Security Studies, Vol. 20, No. 4 (2011), pp. 556-591 [paper, website, data].

Examines the relationship between force size and counterinsurgency outcomes using new data on 171 cases since World War I. These data allow for the first systematic, cross-sectional analysis of several prominent claims about force sizing in counterinsurgency, including the often-cited rule of thumb in official U.S. military doctrine that successful counterinsurgents require twenty troops per thousand people in the area of operations. The data do not support this claim and there do not appear to be any reliable “thresholds” for force sizing. Troop density is positively related to counterinsurgent success, but the relationship is not particularly strong. This pattern holds for a diverse range of subsets within the data, and it does not appear to be driven by strategic selection.

Stephen Biddle and Jeffrey A. Friedman, The 2006 Lebanon Campaign and the Future of Warfare: Implications for Army and Defense Policy (Carlisle, Penn.: U.S. Army War College, 2008) [website].

The 2006 conflict in Lebanon generated a high-profile debate about the sources of Hezbollah’s military effectiveness against Israel, and what this implies for the future of conflict with non-state actors. Some argue that Hezbollah waged an especially effective form of asymmetric warfare; others argue that the group demonstrated a non-state actor’s ability to fight in remarkably conventional ways. This monograph addresses that debate by way of original structured interviews with 36 IDF officers, providing a systematic assessment of Hezbollah’s military behavior in 2006. The evidence shows that Hezbollah combined attributes of “conventional” and “irregular” militaries in a manner that defies standard conceptual dichotomies and previous assessments of the case. Rather than following the typical distinction between states and non-states, the monograph argues that both types of actors should be viewed on a common theoretical spectrum of brute force versus coercion.
-Excerpted in Hybrid Warfare and Transnational Threats: Perspectives for an Era of Persistent Conflict, eds. Paul Brister, William H. Natter III, and Robert Tomes (Washington, D.C.: Center for Emerging National Security Affairs, 2011) [link].
