A study published last September in the Proceedings of the National Academy of Sciences challenges the notion that a substantial minority of Americans—more than two-fifths, according to some reports—condone political violence. The Dartmouth political scientist Sean Westwood and his co-authors argue that “documented support for political violence is illusory, a product of ambiguous questions, conflated definitions, and disengaged respondents.”
Westwood et al. acknowledge that partisan animosity, a.k.a. “affective polarization,” has “increased significantly” during the last few decades. “While Americans are arguably no more ideologically polarized than in the recent past,” they say, “they hold more negative views toward the political opposition and more positive views toward members of their own party.” But at the same time, “evidence suggests that affective polarization is not related to and does not cause increases in support for political violence and is generally unrelated to political outcomes.” So what are we to make of claims that more than a third of Americans believe political violence is justified?
“Despite media attention,” Westwood et al. note, “political violence is rare, amounting to a little more than 1% of violent hate crimes in the United States.” They argue that “self-reported attitudes on political violence are biased upwards because of disengaged respondents, differing interpretations about questions relating to political violence, and personal dispositions towards violence that are unrelated to politics.”
Westwood et al. estimate that, “depending on how the question is asked, existing estimates of support for partisan violence are 30-900% too large.” In their study, “nearly all respondents support[ed] charging suspects who commit acts of political violence with a crime.” These findings, they say, “suggest that although recent acts of political violence dominate the news, they do not portend a new era of violent conflict.”
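To make that quoted range concrete, here is a small illustrative calculation of what an estimate that is 30 percent or 900 percent too large would look like. The 3 percent “true” rate below is an assumption chosen purely for the example, not a figure from the paper.

```python
# Illustrative arithmetic only; the "true" rate of 3% is an assumption,
# not a number reported by Westwood et al.
def overestimate_pct(reported: float, true: float) -> float:
    """Percentage by which a reported rate exceeds the true rate."""
    return (reported - true) / true * 100

# If true support were 3%, a survey reporting 3.9% would be about 30% too
# large, and one reporting 30% would be about 900% too large.
print(f"{overestimate_pct(reported=0.039, true=0.03):.0f}% too large")  # 30% too large
print(f"{overestimate_pct(reported=0.30, true=0.03):.0f}% too large")   # 900% too large
```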
These conclusions are based on three surveys in which Westwood et al. presented respondents with specific scenarios involving different kinds of violence, varying in severity and motivation. “Ambiguous survey questions cause overestimates of support for violence,” they write. “Prior studies ask about general support for violence without offering context, leaving the respondent to infer what ‘violence’ means.” They also note that “prior work fails to distinguish between support for violence generally and support for political violence,” which “makes it seem like political violence is novel and unique.”
A third problem they identify is that “prior survey questions force respondents to select a response without providing a neutral midpoint or a ‘don’t know’ option,” which “causes disengaged respondents…to select an arbitrary or random response.” Since “current violence-support scales are coded such that four of five choices indicate acceptance of violence,” those arbitrary or random responses tend to “overstate support for violence.”
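To see how that coding can exaggerate support, consider a rough back-of-the-envelope simulation. The sample size, the share of disengaged respondents, and the “true” level of support are all invented for illustration; none are figures from the study.

```python
# Sketch of the inflation mechanism under assumed (not reported) numbers:
# disengaged respondents answer at random on a 5-point scale where four of
# the five options are coded as some degree of acceptance of violence.
import random

random.seed(0)

N = 10_000                # hypothetical sample size
TRUE_SUPPORT = 0.04       # assumed genuine support among engaged respondents
DISENGAGED_SHARE = 0.20   # assumed share of respondents answering arbitrarily

support_count = 0
for _ in range(N):
    if random.random() < DISENGAGED_SHARE:
        # A disengaged respondent picks one of five options at random;
        # four of the five are coded as acceptance.
        support_count += random.random() < 4 / 5
    else:
        # An engaged respondent answers sincerely.
        support_count += random.random() < TRUE_SUPPORT

print(f"Measured support: {support_count / N:.1%} "
      f"vs. an assumed true rate of {TRUE_SUPPORT:.0%}")
```

On these invented numbers, a fifth of the sample answering at random is enough to push measured support to roughly 19 percent even though only 4 percent of engaged respondents actually endorse violence, which is the kind of inflation the authors describe.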
What happens when researchers try to address those weaknesses? In all three surveys that Westwood et al. conducted, “respondents overwhelmingly reject[ed] both political and non-political violence.” And while a substantial minority disagreed, that number was inflated by respondents who were classified as “disengaged” based on their failure to retain information from the brief scenarios they read.