Minipublics are small-scale forums that engage representative bodies of randomly selected citizens in structured, informed, and facilitated deliberation about issues of public interest (Fung, 2003; Goodin & Dryzek, 2006; Grönlund et al., 2014; Ryan & G. Smith, 2014). Though different types have distinct functions (Setälä & G. Smith, 2018), the minipublics to which we refer herein are those that seek to ‘shape decision-making indirectly by inserting their recommendations into the citizenry’s public deliberations’ (Lafont, 2015, p. 56; Niemeyer, 2011). Such minipublics often produce reports that include policy-relevant factual information, pro and con arguments, and justifications for those arguments. These minipublics are meant to serve as ‘trusted information proxies’ for citizens seeking to form opinions or make decisions about public policies (Bächtiger, Setälä & Grönlund 2014; MacKenzie & Warren, 2012; Niemeyer, 2014; Warren & Gastil, 2015).

This form of minipublic is designed to induce a broader public deliberation (Caluwaerts & Reuchamps, 2015; Himmelroos, 2017). By inserting information into a deliberative system (Hendriks, 2006; Parkinson & Mansbridge, 2012), a minipublic could achieve two systemic goals. First, it could engender more considered public opinion (Chambers, 2009; Hauser, 2007; Lafont, 2015; Yankelovich, 1991). Second, it could increase a system’s deliberative capacity by stimulating authentic, inclusive, empowered, and consequential deliberation (Dryzek, 2009; Felicetti, Niemeyer & Curato, 2016; Milewicz & Goodin, 2018). From a systemic perspective, a minipublic’s quality hinges not only on the discursive quality of its own deliberation but also on its ability to insert information into the larger deliberative system (Curato & Böker, 2016; Niemeyer & Jennstål, 2018; Olsen & Trenz, 2014).

The Citizens’ Initiative Review (CIR) is one such minipublic, designed to help voters in the state of Oregon make more informed decisions on ballot measures (Gastil & Knobloch, 2020). A deliberative forum authorized by the Oregon state government and run by a separate, neutral convening organization, Healthy Democracy, the CIR convenes 20–24 randomly selected citizens to deliberate on a ballot measure slated to appear on Oregon voters’ ballots in an upcoming statewide election. After four to five days of deliberation, these citizen panelists author a one-page analysis of the measure. The Secretary of State then places this ‘Citizens’ Statement’ in the official Voters’ Pamphlet mailed to all registered voters.

The inclusion of the printed Statement in the mass-mailed Pamphlet renders it a form of mass media: a message transmitted through a ‘communication channel used to simultaneously reach a large number of people’ (Wimmer & Dominick, 2013, p. 2; e.g., see Pfau et al., 1990). More precisely, the Statement is a form of ‘deliberative media,’ which are mass media that not only reach large numbers of citizens but also provide information on issues of public interest while aiming to inspire deliberative behaviors among their audiences (Carcasson & Sprain, 2010; Wessler, 2008a, 2008b). Deliberative media share the same systemic influence goals as minipublics, which is why minipublics like the CIR use them (Dryzek et al., 2009; Felicetti, Niemeyer & Curato, 2016; Ingham & Levin, 2018a, 2018b; Smith, 2012; Warren & Pearse, 2008).

Prior research on deliberative media has assessed the extent to which their content reflects normative standards (Wessler, 2008a, 2008b) and has largely been limited to non-experimental content analyses (Rinke et al., 2013; Wessler & Rinke, 2014; for a notable exception, see van der Wurff, De Swert & Lecheler, 2018). Echoing Curato and Böker (2016), however, we argue that the deliberative quality of these media also depends on their external reception. This suggests the need for an experimental approach in which voters are exposed to different media.

Research has shown the CIR to be of high deliberative quality (Knobloch et al., 2013, 2014). Past studies have found the Citizens’ Statement can inform voters’ opinions and augment different aspects of their deliberative capacity (Gastil et al., 2014, 2015, 2016; Gastil & Knobloch, 2010; Knobloch et al., 2014; Knobloch, Barthel and Gastil, 2020; Már & Gastil, 2020). The Statements also function as a ‘deliberative cue’ to offset some of the more detrimental effects of overreliance on partisan cues (Gastil et al., 2018; Már & Gastil, 2020).

Deliberative media like the Statement, however, can only reliably inform opinions to the extent that citizens perceive them to be unbiased and credible (Morris, 2007; Rahman, 2014). Decreasing public trust in mass media (Rainie, Keeter & Perrin, 2019; Swift, 2016) and increasing hyper-partisanship (Pew Research Center, 2017) may undermine these perceptions and reduce the deliberative efficacy of a minipublic like the CIR. Though Már and Gastil (2020) found the Statement can inform voters’ opinions across partisan identities and prior attitudes, no prior work has considered how voters’ distrust of media may affect perceptions of the Statement’s bias.

To fill this research gap, we evaluated the deliberative quality of the two 2014 CIRs through a secondary analysis of voters’ perceptions of the Citizens’ Statements they produced. We examined whether these perceptions demonstrate the occurrence of hostile media perceptions (HMP) as a result of reading the Statement.

First described by Vallone, Ross, and Lepper (1985), HMP are a phenomenon wherein readers perceive mass media as biased against their point of view and, consequently, as less credible (Arpan & Raney, 2003; Perloff, 1989). Whereas HMP research has traditionally been limited to studies of perceptions of conventional, professionally sourced news media (Giner-Sorolla & Chaiken, 1994; Gunther & Liebhart, 2006; Gunther & Schmitt, 2004), recent studies have widened the scope to examine citizens’ perceptions of mass media created or shared by their peers (Ardèvol-Abreu & Gil De Zúñiga, 2017; Arlt & Wolling, 2016; Borah, Thorson & Hwang, 2015; Carr et al., 2014; Chung et al., 2015; Gearhart, Moe & Zhang, 2020; Herbert & Hansen, 2018; Hopke et al., 2010; Houston, Hansen & Nisbett, 2011; Johnson et al., 2007; Kim, 2015; Lee, 2012; Lee, Kim & Coe, 2018; Lin, Haridakis & Hanson, 2016; Nix & Pickett, 2017; Shin & Thorson, 2017; Yun, Park & Lee, 2016; Yun et al., 2018). Research has also shown the increasing influence of peer-shared media on public opinion formation (Smith, 2009).

We aim to enrich this literature by studying whether HMP occur for the CIR’s Citizens’ Statement. Unlike conventional news, this unique message to voters is written by citizens as a product of their deliberation and then transmitted to their peers by their government. If HMP occur, voters will perceive the Statement as biased against their preexisting preferences, which would undermine its perceived credibility and, in turn, reduce the CIR’s deliberative impact.

We also use an HMP approach to test other predictions from deliberative theory. Prior work suggests that voters who understand a minipublic’s design and purpose will view its findings as credible (Cutler et al., 2008; Fournier et al., 2011; Gastil et al., 2016; Niessen, 2019; Warren & Gastil, 2015; for a notable exception, see Devillers et al., 2020). Voters with more faith in the efficacy of public deliberation should likewise give more credence to messages generated by deliberative bodies (Fung, 2005; White, 2010). Such beliefs can be strengthened by direct participation in deliberation (Brinker, Gastil & Richards, 2015; Gastil et al., 2010; Knobloch & Gastil, 2015), but no prior work has considered how they might shape perceptions of deliberative outputs. Using the HMP model, we test the extent to which these factors shape the perceived credibility of the CIR’s Statements.

Research Context

Our unique research setting warrants a fuller description of the CIR before framing it in the terms of HMP theory. In 2009, the Oregon state legislature authorized the pilot test of a new method to improve direct democracy in statewide elections. Every two years in Oregon, citizens gather signatures to place initiatives on the ballot; these initiatives ask voters to approve or reject proposed laws on subjects ranging from tax policy to environmental regulation. The legislative intent of the CIR was to provide Oregon voters with impartial information on these statewide ballot measures by giving them a page of information written by their peers.

The inaugural CIR, held in 2010, gathered a stratified random sample of 24 Oregon voters, all of whom had expenses covered in addition to a modest honorarium. Participants then spent five days studying a proposed criminal sentencing law. These citizen panelists heard testimony from advocates and opponents of the proposal, engaged in question-and-answer sessions with background experts, and enjoyed ample time for small-group deliberation. In the final two days of the process, panelists wrote their one-page Citizens’ Statement. This began with a summary of ‘Key Findings’ about the proposed law, which consisted of neutral information about the law that the panelists considered important for voters to know. Separate sections presented what the panelists considered to be the strongest arguments for and against passing the law. Before the 2010 general election, the Oregon Secretary of State placed this Citizens’ Statement in the Voters’ Pamphlet, where it appeared after a technical summary and fiscal impact statement for the measure. Following the first round of CIRs, in 2011 the Oregon legislature created the CIR Commission to make this a regular part of the electoral process. Since its inception, the CIR has been held seven times in Oregon, eight times in other US states, and most recently in Finland and Switzerland (Gastil and Knobloch, 2020).

The CIR feeds into the stream of deliberative theory and practice in many ways. Modeled on the Citizens’ Jury first created in the 1970s (Crosby & Nethercutt, 2005), it resembles many other small deliberative bodies tasked with analyzing a policy proposal and drafting an issue summary and/or recommendation. It also fits within the larger set of minipublics (Grönlund, Bächtiger & Setälä, 2014), which convene random samples of citizens to deliberate on one or more issues, then report their findings to elected officials, agencies, or the general public (Farrell & Suiter, 2019; Fishkin, 2018). What distinguishes the CIR is its legal authority to provide a timely report to an electorate via its Citizens’ Statements. In this sense, the CIR aims to use its own internal deliberation as a way to make the public a bit more deliberative on a mass scale (MacKenzie & Warren, 2012; Warren & Gastil, 2015).

The two instances of the CIR used in this study were convened in Salem, Oregon in August 2014 to provide statements on ballot measures appearing in the November general election. Measure 90 (August 17–20) proposed replacing the conventional party primary system with a ‘top-two’ primary, in which the two candidates with the most votes advance to the general election, regardless of their political party. Measure 92 (August 21–24) proposed mandating that certain foods containing genetically modified organisms (GMOs) be labeled as such. Both measures ultimately failed: 68% of voters opposed Measure 90, as did 50.5% of those voting on Measure 92. (These and other details on the measures are available at Ballotpedia.org.)

Because Oregon conducts its elections by mail, every registered Oregon voter was sent both a printed ballot and the official Voters’ Pamphlet several weeks before Election Day. For both Measures 90 and 92, the centerpiece of the 2014 Oregon Citizens’ Statement was a bulleted list of Key Findings. Below that were two boxes: On the left was the ‘Citizen Statement in Support of the Measure’ (pro arguments), and on the right stood a ‘Citizen Statement in Opposition to the Measure’ (con arguments). Above these were two descriptive paragraphs to provide context. The first, titled ‘Citizens’ Initiative Review of Ballot Measure 92,’ explained that ‘the opinions expressed in this statement are those of the members of a citizen panel and were developed through the Citizens’ Initiative Review process as adopted by the Oregon State Legislature.’ It further specified that these views are ‘NOT official opinions or positions endorsed by the State of Oregon or any government agency,’ nor does the CIR serve as ‘a judge of the constitutionality or legality of any ballot measure.’ Below that was a ‘Description of the Citizens’ Initiative Review,’ which offered this description of the CIR process: ‘This statement was developed by an independent panel of 20 Oregon voters, chosen at random from the voting population of Oregon, and balanced to fairly reflect the state’s voting population. The panel has issued this statement after three and a half days of hearings and deliberation. This statement has not been edited nor has the content been altered.’

Theory and Hypotheses

Our core hypotheses derive from HMP theory. We extend that theory with additional hypotheses about how knowledge and attitudes toward minipublics might affect perceptions of deliberative media. We present each of these hypotheses in relation to the Oregon CIR.

Hostile Media Perceptions and the Citizens’ Statement

Gunther and Schmitt (2004) summarize the core proposition of HMP theory as ‘the tendency for partisans to judge mass media coverage as unfavorable to their own point of view’ and favorable to opposing views (p. 55). The central claim of HMP is that people with preexisting views on an issue will perceive neutral or balanced mass media content about that issue as biased against their views. Considerable research has advanced HMP theory since its inception (Vallone et al., 1985; for a review, see Perloff, 2015). Studies have examined the nature and limits of this phenomenon across various groups’ perceptions of different mass media and have produced robust evidence in support of the theory (Arceneaux, Johnson & Murphy, 2012; Ariyanto, Hornsey & Gallois, 2007; Arpan & Raney, 2003; Choi, Yang & Chang, 2009; Christen, Kannaovakun & Gunther, 2002; Eveland & Shah, 2003; Feldman, 2011; Giner-Sorolla & Chaiken, 1994; Gunther & Chia, 2001; Gunther & Liebhart, 2006; Gunther & Schmitt, 2004; Hwang, Pan & Sun, 2008; Matheson & Dursun, 2001; Moehler & Singh, 2011; Morris, 2007; Perloff, 1989; Richardson et al., 2008; Schmitt, Gunther & Liebhart, 2004; Tsfati, 2007; Tsfati & J. Cohen, 2005; for a meta-analysis, see Hansen & Kim, 2011).

The first study to test the HMP perspective compared pro-Arab and pro-Israeli students’ perceptions of network television news broadcasts on the Arab-Israeli conflict in the Middle East (Vallone, Ross & Lepper, 1985). Each group saw the programs as biased against their side and in favor of the other. Conversely, students with ‘generally mixed’ or neutral feelings about the conflict perceived the broadcasts as neutral and fair. Studies conducted since have replicated these findings and refined HMP theory (Giner-Sorolla & Chaiken, 1994; Perloff, 1989). Later studies have argued that hostile perceptions are a function of people’s involvement in an issue (Christen, Kannaovakun & Gunther, 2002; Gunther, 1992) or their prior attitudes about that issue (Arpan & Raney, 2003), rather than simply the result of partisan identities or ideological commitments.

There may also be cases wherein people on both sides of an issue perceive mass media coverage as biased against one side and in favor of the other, albeit to different degrees. To account for such cases, Gunther and Chia (2001) theorized a ‘relative’ version of hostile media perceptions (RHMP), in which people on opposite sides of an issue who are exposed to the same mass media coverage agree on the direction of its bias but disagree on its magnitude. RHMP theory describes cases wherein each group perceives a mass-mediated message ‘to be either more hostile to, or at least less agreeable with, their own point of view than the opposing group sees it’ (2001, p. 690). As a result, partisans perceive less bias in mass media ‘slanted to support their view than their opponents on the other side of the issue’ (Feldman, 2011, p. 411; see also Arceneaux et al., 2012; Gunther, Miller & Liebhart, 2009). Research has shown robust support for RHMP (Arceneaux & M. Johnson, 2015; Arceneaux et al., 2012; Coe et al., 2008; Feldman, 2011; Gunther & Chia, 2001; Gunther & Christen, 2002; Gunther et al., 2001).

In addition, Matheson and Dursun (2001) found that HMP can extend to people’s perceptions of argument strength. They exposed Serb and Muslim immigrants from Bosnia to media reports covering the contemporaneous conflict between these factions in their country of origin. The reports were balanced, in that they contained arguments from both sides of the conflict. Each group not only perceived the reports and their editors as biased against its side but also rated the arguments presented for its side as weaker than those presented for the other side.

In sum, an HMP approach expects that people with strong preexisting attitudes on an issue will perceive mass media about that issue as biased against their preexisting views and, in turn, less credible (Arpan & Raney, 2003; Johnson & Kaye, 1998). In addition, HMP predicts that these people will perceive media arguments that support their preexisting positions as weaker and arguments in these media that oppose their preexisting positions as stronger (Arpan & Raney, 2003; Matheson & Dursun, 2001).

Applying the HMP approach to our study of the CIR, recall first that the Citizens’ Statement consists of three main parts. It begins with Key Findings on the measure, which are meant to consist of neutral information about it. That is followed by separate sections for the pro and con arguments on the measure, which present partisan arguments from each side in a balanced format. The HMP approach predicts that voters’ preexisting views on the ballot measure will cause them to diverge in their perceptions of the strength of the Statement’s pro and con arguments. A voter in favor of the measure will perceive the Statement’s pro arguments as weaker and its con arguments as stronger, and vice versa for a voter opposing the measure. Voters will also perceive the Statement’s Key Findings as more biased against their preexisting views and in favor of the opposite position on the measure.

H1: Relative to voters with no preexisting position on a ballot measure, voters supporting or opposing that measure will perceive (a) the Citizens’ Statement’s pro or con arguments consistent with their preexisting views as weaker, (b) the pro or con arguments opposite to their views as stronger, and (c) its Key Findings as more biased against their preexisting views and less credible.

Confidence in Knowledge of the CIR

Another factor that can affect the impact of hostile media perceptions is how a person perceives the message’s source (Gunther & Liebhart, 2006). A message from a source considered part of a person’s ingroup dampens the HMP effect as compared to a source representing an outgroup (Cohen, 2003; Duck, Terry and Hogg, 1998; Hartmann & Tanis, 2013; Reid, 2012). Other studies suggest that the HMP effect is reduced when the source of the message is perceived to be less biased in general (Choi et al., 2009; Giner-Sorolla & Chaiken, 1994; Gunther & Liebhart, 2006). Both explanations imply an element of trust when it comes to how a message source shapes bias and credibility perceptions (Tsfati & Cohen, 2005, 2013). When people trust its source, a message’s content can appear less biased and more credible (Giffin, 1967; Gunther et al., 2009; Hovland & Weiss, 1951; T. J. Johnson & Kaye, 1998; Kiousis, 2001; Metzger & Flanagin, 2013; Pornpitakpan, 2004).

In addition, mass media created by governments are particularly susceptible to the HMP effect because many citizens distrust government (Ceron & Memoli, 2015; Chia et al., 2007; Moehler & Singh, 2011; Tsfati & J. Cohen, 2005). Skepticism toward official sources reduces perceptions of these media as unbiased and credible (T. J. Johnson & Kaye, 2004; T.-T. Lee, 2010). Put in the context of the CIR, this has implications for perceptions of the Citizens’ Statement. Recall that the Statement is written by a small group of randomly selected citizen panelists but distributed by the Oregon Secretary of State in the official Voters’ Pamphlet. As such, the Statement is a distinctive kind of deliberative media: one authorized by state government, produced by citizens through a deliberative forum, overseen by an independent organization (Healthy Democracy), and transmitted through a mass-mailed government document.

How might citizens perceive such a source and its message? Research on the British Columbia Citizens’ Assembly suggests that voters who are more aware of a minipublic’s deliberative origins consider its recommendations less biased and more credible (Cutler et al., 2008; Fournier et al., 2011); by extension, voters more aware of the Statement’s deliberative origins should view it the same way. Nevertheless, if the Statement does not provide enough information about its origins, voters may erroneously construe it as coming directly from public officials and, therefore, perceive it as more biased and less credible. In addition, voters’ knowledge of other features of the CIR’s design (e.g., how its panelists were selected, how its deliberative process was conducted, and who sponsored and organized the CIR) may also affect their perceptions of the Statement (Boulianne, 2018; Gastil et al., 2016; Niessen, 2019).

Thus, we predict that voters who think the Citizens’ Statement provided enough information about the CIR’s design will feel more confident in their assessment of this minipublic. Those with more confident assessments will perceive the Statement’s pro and con arguments as stronger. They will also view its Key Findings as less biased and more credible.

H2: Relative to voters with less confidence in their knowledge of the CIR, voters with more confidence in that knowledge will perceive (a) both the Statement’s pro and con arguments as stronger and (b) its Key Findings as less biased and more credible.

Faith in Deliberation

Research on minipublics has shown that they can render considered judgments and inform citizens’ opinions and voting intentions (Boulianne, 2018; Cutler et al., 2008; Fishkin & Luskin, 2005; Fournier et al., 2011; Gastil et al., 2016, 2018; Ingham & Levin, 2018a, 2018b; Luskin, Fishkin & Jowell, 2002; Már & Gastil, 2020), and they may serve as ‘trusted information proxies’ citizens can use in forming those opinions prior to voting (MacKenzie & Warren, 2012; Warren & Gastil, 2015). However, citizens will only use minipublics in this manner to the extent that their deliberative media are perceived to be unbiased and credible. Deliberative theory also suggests that perceiving these media as deliberative ‘requires a certain faith’ in the capacity of public deliberation to render sound judgment (Fung, 2005, p. 401; see also White, 2010). Fung (2005, p. 401) describes this ‘faith in deliberation’ as having two components:

The first component…is that deliberation can produce good results not only under circumstances of perfect equality and deep mutual deliberative commitment but also under more realistic conditions. Its second component is that circumstances that are hostile to deliberation can sometimes be made more congenial.

Fung (2005, p. 406) suggests that having faith in deliberation requires believing not only that it will ‘generate superior social choices compared with other methods of making decisions’ but also that it can be realized in practice and ‘improved [ideally] by increasing the quality of information or the [deliberative] capacities of participants.’

Put simply, faith in deliberation rests on a belief that deliberation is both desirable and feasible as a way of reaching considered judgments. In the case of the Citizens’ Statement, this means that voters will treat the CIR as a ‘trusted information proxy’ to the extent that they consider deliberation a sound method for making judgments about ballot measures. This also speaks to what makes faith in deliberation distinct from confidence in one’s knowledge of the CIR. Whereas such confidence comes from the amount of information provided about the CIR, faith is a deeper conviction about the virtue of deliberation, independent of one’s knowledge of the CIR itself.

No prior work has considered how faith in deliberation may condition perceptions of deliberative media. The closest research appears in Gastil et al. (2010), which showed connections among faith in the jury system, willingness to serve on a jury, and public affairs media use. Similarly, we expect that those with more faith in deliberation will perceive not only the pro/con arguments included in the Statement as stronger but also its Key Findings as less biased and more credible.

H3: Relative to those voters with less faith in deliberation, those voters with more faith in deliberation will (a) perceive both the Citizens’ Statement’s pro and con arguments as stronger and (b) perceive the Statement’s Key Findings as less biased and more credible.

Methods

Participants and Procedures

Registered Oregon voters were recruited through Qualtrics, an online survey service that provides Internet survey panels. The original survey was conducted from October 16 to November 7, 2014, following protocols approved by the Pennsylvania State University Human Subjects Division (Gastil et al., 2016). Because Oregon Election Day was November 4, 2014, the survey period started 19 days before and closed three days after that date.

The original survey drew a sample of 2,077 respondents for a broader evaluation of the CIR (Gastil et al., 2015, 2016). Only a subsample of these respondents (N = 313) had not previously been exposed to the 2014 Citizens’ Statements and were in a survey condition that included measurement of all the variables relevant to our hypotheses. This sample was large enough to detect moderate effect sizes (Cohen, 1988) equivalent to or greater than those found in previous HMP research (Hansen & Kim, 2011). Table 1 provides demographic information about the sample and descriptive statistics for all the measures in this study.
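As a rough illustration of this power claim, the sketch below computes statistical power for a multiple regression via the noncentral F distribution. Its inputs are assumptions rather than values taken from the original protocol: 13 predictors (matching the models reported below), N = 313, α = .05, and Cohen’s (1988) conventional ‘medium’ effect size of f² = .15.

```python
# Minimal power sketch (assumed inputs: 13 predictors, N = 313, alpha = .05,
# and Cohen's medium effect size f^2 = .15; these values are illustrative,
# not taken from the original survey protocol).
from scipy import stats

N, u = 313, 13                    # sample size and numerator df (predictors)
v = N - u - 1                     # denominator (error) df
f2, alpha = 0.15, 0.05            # Cohen's (1988) medium effect size; test level

lam = f2 * (u + v + 1)            # noncentrality parameter (Cohen, 1988)
f_crit = stats.f.ppf(1 - alpha, u, v)          # critical F under the null
power = 1 - stats.ncf.cdf(f_crit, u, v, lam)   # P(reject H0 | noncentral F)
print(f"Power to detect f² = {f2} with N = {N}: {power:.3f}")
```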

Table 1

Descriptive Statistics for All Variables.

Variable | N | M | SD | Items | Range | Scale reliability (α)
Pro argument strength | 313 | .06 | .91 | 3 | –2 to 2 | .76
Con argument strength | 313 | .05 | .95 | 3 | –2 to 2 | .79
Perceived bias in Key Findings | 313 | .09 | 1.07 | 1 | –2 to 2
Absolute value of perceived bias in Key Findings | 313 | .76 | .76 | 1 | 0 to 2
Perceived credibility of Key Findings | 313 | .13 | .67 | 2 | –2 to 2 | .82
Awareness of the CIR | 313 | .52 | .40 | 3 | 0 to 1 | .75
Faith in deliberation | 313 | .93 | .70 | 3 | –2 to 2 | .73
Conservatism | 313 | –.26 | 1.69 | 2 | –3 to 3 | .80
Strength of partisanship* | 313 | 1.43 | .94 | 2 | 0 to 3
Interest in politics | 313 | 2.30 | .82 | 1 | 0 to 3
Voting frequency | 313 | 3.32 | .94 | 2 | 0 to 4 | .85
Age (years) | 313 | 48.13 | 16.68 | 1 | 18 to 85
Ballot measure assignment (M90 = 0, M92 = 1) | 313 | .54 | .50 | 1 | 0 to 1
Level of education | 313 | Mode = Some college, or an associate degree | | 1 | 1 to 9
Gender (female = 1, male = 0)† | 312 | Female = 214, Male = 98 | | 1 | 0 to 1
Ethnicity (white = 1, all others = 0)‡ | 313 | White = 275, All Others = 38 | | 1 | 0 to 1
Voting preference | 313 | No = 108, Yes = 111, Unsure = 94 | | 3

  • * This variable is the absolute value of the Conservatism variable.

  • † One respondent identified as ‘transgender/other,’ resulting in N = 312 for the gender dichotomy.

  • ‡ Respondents were only coded as ‘white’ if they only identified with that ethnicity. Respondents who identified as ‘white’ and one or more other ethnicities were not coded as ‘white.’

Survey respondents were randomly assigned to read the Citizens’ Statement for either Measure 90 (n = 144) or Measure 92 (n = 169). Table 2 shows the demographic information and descriptive statistics for the measures in this study broken down by the assigned ballot measure.

Table 2

Descriptive Statistics for Subsamples by Ballot Measure.

Variable | Measure 90 | Measure 92
Pro argument strength | –.08 (.87) | .19 (.93)
Con argument strength | .14 (.95) | –.03 (.95)
Perceived bias in Key Findings | –.10 (.97) | .25 (1.13)
Absolute value of perceived bias in Key Findings | .60 (.77) | .90 (.72)
Perceived credibility of Key Findings | .07 (.65) | .18 (.67)
Awareness of the CIR | .50 (.40) | .54 (.41)
Faith in deliberation | .96 (.64) | .90 (.75)
Conservatism | –.21 (1.74) | –.30 (1.66)
Strength of partisanship | 1.50 (.89) | 1.37 (.97)
Interest in politics | 2.26 (.83) | 2.34 (.81)
Voting frequency | 3.42 (.83) | 3.23 (1.02)
Age (years) | 49.98 (16.74) | 46.56 (16.51)
Level of education | Mode = Some college, or an associate degree | Mode = Some college, or an associate degree
Gender (female = 1, male = 0)* | Female = 98, Male = 45 | Female = 116, Male = 53
Ethnicity (white = 1, all others = 0) | White = 130, All Others = 14 | White = 145, All Others = 24
Voting preference on ballot measure | No = 58, Yes = 59, Unsure = 27 | No = 50, Yes = 52, Unsure = 67
Subsample size (n) | 144* | 169
  • Cell entries are M and SD (in parentheses) unless otherwise indicated. * One respondent in the Measure 90 subsample identified as ‘transgender/other,’ resulting in n = 143 for the gender dichotomy in the Measure 90 subsample.

Measures

Voting preference

Before being exposed to the Citizens’ Statement on the ballot measure to which they were assigned, respondents were asked how they had voted or were planning to vote on that measure. Those who had not taken a clear stance for or against the measure were then asked whether they ‘leaned toward yes,’ ‘leaned toward no,’ or ‘leaned neither way.’ Based on their answers, respondents were placed in one of three categories: ‘Yes’ (n = 111), ‘No’ (n = 108), or ‘Undecided’ (n = 94).

Confidence in Knowledge of the Citizens’ Initiative Review

Voters’ confidence in their knowledge of the CIR was operationalized as their satisfaction with the amount of information provided about it in the Statement. This measure used an index consisting of three ‘yes’ (1) or ‘no’ (0) items asking voters whether the preface of the Statement provided enough information about (a) how the citizen panelists were selected, (b) how the CIR panel process was conducted, and (c) who sponsored and organized the CIR. Averaging these items created a reliable index of confidence in knowledge about the CIR (labeled ‘Awareness of the CIR’ in Tables 1, 2, 4, and 5) that ranged from 0 to 1 (M = .52, SD = .40, α = .75).
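For illustration, a minimal sketch of how such a dichotomous index and its reliability might be computed appears below. The data frame, column names (info_selection, info_process, info_sponsor), and responses are hypothetical; they are not drawn from the study’s data set or codebook.

```python
# Sketch of building a 0-1 index from three yes/no items and checking
# Cronbach's alpha. Column names and data are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "info_selection": [1, 0, 1, 1, 0, 1],  # enough info on panelist selection?
    "info_process":   [1, 0, 1, 0, 0, 1],  # enough info on the panel process?
    "info_sponsor":   [1, 1, 1, 0, 0, 1],  # enough info on sponsors/organizers?
})

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

df["confidence_in_knowledge"] = df.mean(axis=1)   # index ranges from 0 to 1
print(f"alpha = {cronbach_alpha(df[['info_selection', 'info_process', 'info_sponsor']]):.2f}")
```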

Faith in deliberation

Three statements used in previous deliberation research (Brinker et al., 2015; Knobloch & Gastil, 2015) measured voters’ faith in deliberation using five-point Likert scales ranging from ‘strongly disagree’ (–2) to ‘strongly agree’ (2). The three items read as follows: ‘even people who strongly disagree can make sound decisions if they sit down and talk’; ‘everyday people from different parties can have very civil conversations about politics’; and ‘the first step in solving our common problems is to discuss them together.’ Averaging these three items resulted in a reliable scale (M = .93, SD = .70, α = .73). A fourth, reverse-coded item was dropped because it substantially reduced the scale’s reliability.

Perceived strength of pro and con arguments

We measured perceived strength of pro and con arguments in line with prior HMP studies (Gunther & Liebhart, 2006; Gunther & Schmitt, 2004). As in Matheson and Dursun (2001), these items rated how well the Statement represented the pro and con sides of the issue. For both pro and con arguments, we used an index of three items on four-point scales that asked respondents to rate arguments from ‘very weak’ to ‘very strong,’ from ‘not at all relevant’ to ‘completely relevant,’ and from ‘not at all trustworthy’ to ‘completely trustworthy.’ With ‘don’t know’ responses serving as a midpoint, scores were averaged to make reliable scales from –2 to +2 for Pro Argument Strength (M = .06, SD = .91, α = .76) and Con Argument Strength (M = .05, SD = .95, α = .79).
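A hedged sketch of this recoding and averaging appears below. The intermediate response labels (‘somewhat weak,’ ‘somewhat strong’) and the variable names are our assumptions for illustration; only the endpoint labels and the ‘don’t know’ midpoint come from the description above.

```python
# Illustrative recoding of one argument-rating item onto a -2..+2 scale with
# "don't know" as the midpoint, then averaging items into a strength index.
# Response labels other than the endpoints are assumptions for illustration.
import pandas as pd

strength_map = {
    "very weak": -2, "somewhat weak": -1, "don't know": 0,
    "somewhat strong": 1, "very strong": 2,
}

responses = pd.DataFrame({
    "pro_strength":    ["very strong", "don't know", "somewhat weak"],
    "pro_relevance":   [2, 0, -1],   # assumed already recoded to -2..+2
    "pro_trustworthy": [1, 0, -2],   # assumed already recoded to -2..+2
})
responses["pro_strength"] = responses["pro_strength"].map(strength_map)

# Pro Argument Strength index = mean of the three recoded items (-2 to +2)
responses["pro_argument_strength"] = responses[
    ["pro_strength", "pro_relevance", "pro_trustworthy"]
].mean(axis=1)
print(responses["pro_argument_strength"].tolist())
```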

Perceived bias in Key Findings

To measure perceived bias in the Key Findings, we used two versions of one measure mirroring those used in previous HMP research (Gunther & Schmitt, 2004). To measure the extent to which voters perceived Key Findings as biased in a particular direction (i.e., for or against the ballot measure), a single item asked respondents to rate the Key Findings’ bias with a five-point scale ranging from ‘strongly against’ (–2) to ‘strongly in favor’ (2), with ‘generally neutral’ (0) as the midpoint (M = .09, SD = 1.07). To measure the extent to which respondents perceived the Key Findings as biased in either direction, the previous item was transformed by calculating its absolute value (M = .76, SD = .76).

Perceived credibility of Key Findings

We measured the perceived credibility of the Key Findings by following a method used in previous HMP research (Gunther & Liebhart, 2006; Gunther & Schmitt, 2004) and other studies of mass media perceptions (Metzger, 2007; Wathen & Burkell, 2002). We used an index consisting of two items using five-point scales that asked respondents how ‘accurate’ and ‘relevant’ they considered the Key Findings. Responses ranged from ‘not at all’ (–2) to ‘completely’ (2), with ‘don’t know’ as a midpoint (M = .13, SD = .67, α = .82).

Finally, we created variables to measure voters’ partisan and ideological self-identification, political attitudes, and voting behavior. These appear in Table 3.

Table 3

Descriptions and Item Wording for Additional Variables.

Additional Variable | Variable Description | Item(s)
Conservatism | Measures partisan and ideological commitment (i.e., both magnitude and direction); higher values indicate higher conservative commitment in particular | (1) 7-point scale measuring partisanship from ‘strong Democrat’ (–3) to ‘strong Republican’ (3); (2) 7-point scale measuring ideology from ‘extremely liberal’ (–3) to ‘extremely conservative’ (3)
Strength of partisanship | Measures strength of partisan and ideological commitment (i.e., just magnitude); higher values indicate higher commitment in general | Absolute value of Conservatism; 4-point scale ranging from 0 to 3
Interest in politics | Measures voters’ interest in politics and public affairs; higher values indicate higher interest | 4-point scale ranging from ‘hardly at all’ (0) to ‘most of the time’ (3), asking ‘How often would you say you follow what’s going on in government and public affairs?’
Voting frequency | Measures how often the voter votes in different elections; higher values indicate higher frequency | (1) 5-point scale ranging from ‘never’ (0) to ‘always’ (4), asking ‘how often have you voted in local and state primary elections?’; (2) 5-point scale ranging from ‘never’ (0) to ‘always’ (4), asking ‘how often have you voted in statewide general elections?’

Results

Statistical Approach

We tested our hypotheses using multiple linear regression analyses that included predictors corresponding to each hypothesis. This approach meant evaluating the relevant parts of all three hypotheses simultaneously within each model rather than proceeding through the hypotheses one at a time. Each regression model assesses the predictive power of voters’ preexisting preferences on the ballot measure (H1), their confidence in their knowledge of the CIR (H2), and their faith in deliberation (H3). What distinguishes the models is their criteria: voters’ perceptions of the three parts of the Citizens’ Statement (pro arguments, con arguments, and Key Findings).

Testing our predictions about bias in the Key Findings required models with two different criteria. To test H1, we created a model with perceived bias in the Key Findings as its criterion, which permitted testing whether voters’ preexisting preferences predicted perceptions of that section as biased in a particular direction (for or against the measure). To test H2 and H3, we created a model with the absolute value of perceived bias in the Key Findings as its criterion, which permitted testing whether voters’ confidence in their knowledge of the CIR and faith in deliberation predicted perceptions of that section as less biased in general. We then included both of these criteria (i.e., perceived bias in the Key Findings and its absolute value) as predictors in our final model of the perceived credibility of the Key Findings. After all, an HMP approach predicts that mass media considered more biased (either in general or against preexisting preferences) will be viewed as less credible (Arpan & Raney, 2003).

Throughout our results, we report regression coefficients in their standardized forms (β). The statistical tables show their unstandardized coefficients, standard errors, and adjusted coefficients of determination (R2adjusted).
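To make the modeling approach concrete, the sketch below fits the final credibility model with statsmodels on placeholder data. All variable names (credibility, bias, abs_bias, vote_yes, vote_no, awareness_cir, faith_delib, conservatism) and the simulated responses are illustrative, not the study’s data or code; z-scoring the continuous variables beforehand puts their coefficients on a standardized metric comparable to the reported β values.

```python
# Minimal sketch of the regression approach (placeholder data and variable
# names; not the study's actual data set or code).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 313
df = pd.DataFrame({
    "credibility":   rng.normal(size=n),    # perceived credibility of Key Findings
    "bias":          rng.normal(size=n),    # perceived bias in Key Findings
    "vote_yes":      rng.integers(0, 2, n), # preexisting preference dummies (H1)
    "vote_no":       rng.integers(0, 2, n),
    "awareness_cir": rng.uniform(0, 1, n),  # confidence in knowledge of the CIR (H2)
    "faith_delib":   rng.normal(size=n),    # faith in deliberation (H3)
    "conservatism":  rng.normal(size=n),    # one of the controls
})
df["abs_bias"] = df["bias"].abs()           # perceived bias in either direction

# Standardize the continuous variables so their coefficients read as betas.
for col in ["credibility", "bias", "abs_bias", "awareness_cir", "faith_delib", "conservatism"]:
    df[col] = (df[col] - df[col].mean()) / df[col].std(ddof=1)

# Final model: credibility regressed on preferences (H1), awareness (H2),
# faith in deliberation (H3), a control, and both bias criteria.
model = smf.ols(
    "credibility ~ vote_yes + vote_no + awareness_cir + faith_delib"
    " + conservatism + bias + abs_bias",
    data=df,
).fit()
print(model.summary())
```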

Perceived Strength of Pro and Con Arguments

The first regression model tested predictors of Pro Argument Strength, R2 = .20, F(13, 298) = 5.71, p < .001. Table 4 shows that voters’ preexisting preferences significantly predicted this criterion: relative to other voters, those who supported the ballot measure rated the pro arguments more favorably (β = .16, p = .005), and those who opposed it rated them less favorably (β = –.19, p = .001). Both findings run in the direction opposite to H1. Voters with higher confidence in their knowledge of the CIR (β = .13, p = .013) and more faith in deliberation (β = .15, p = .006) gave the pro arguments more favorable ratings, supporting both H2 and H3.

Table 4

Multiple Linear Regression Analyses for Pro and Con Argument Strength.

Predictor Variable | Pro Argument Strength | Con Argument Strength
Age (years) | –.004 (.003) | –.001 (.004)
Gender (female = 1) | .36** (.11) | –.13 (.11)
Level of education | .05 (.03) | .12** (.04)
Ethnicity (white = 1) | –.02 (.15) | .02 (.17)
Ballot measure assignment (M92 = 1) | .22* (.10) | –.16 (.11)
Voting preference of ‘yes’ (yes = 1; no/undecided = 0) | .30** (.11) | –.17 (.12)
Voting preference of ‘no’ (no = 1; yes/undecided = 0) | –.36** (.11) | .18 (.12)
Awareness of the CIR | .31* (.12) | .40** (.13)
Faith in deliberation | .20** (.07) | .20** (.08)
Strength of partisanship | –.02 (.06) | –.03 (.06)
Conservatism | –.07* (.03) | –.05 (.03)
Interest in politics | .06 (.07) | –.06 (.07)
Voting frequency | –.11 (.06) | –.03 (.07)
R2 and (R2adjusted) | .20** (.16) | .12** (.09)
  • Note: N = 312. * p < .05, ** p < .01, two-tailed tests. Main row entries are unstandardized regression coefficients, with their standard errors in parentheses.

The second regression model tested predictors of Con Argument Strength, R2 = .12, F(13, 298) = 3.21, p < .001. Table 4 shows that, unlike Pro Argument Strength, Con Argument Strength was not significantly predicted by voters’ preexisting preferences, meaning H1 again was not supported. As with Pro Argument Strength, voters with higher confidence in their knowledge of the CIR (β = .17, p = .002) and more faith in deliberation (β = .15, p = .008) gave the con arguments more favorable ratings, supporting both H2 and H3.

Perceptions of Key Findings

The third and fourth regression models predicted perceived bias in the Key Findings (R2 = .07, F(13, 298) = 1.75, p = .050) and its absolute value (R2 = .10, F(13, 298) = 2.39, p = .005). Table 5 shows that neither of these criteria was significantly predicted by voters’ preexisting preferences, confidence in their knowledge of the CIR, or faith in deliberation. Thus, these findings did not support any of our hypotheses.

Table 5

Multiple Linear Regression Analyses for Perceived Bias and the Credibility of Key Findings.

Predictor Variable | Perceived Bias in Key Findings | Absolute Value of Perceived Bias in Key Findings | Perceived Credibility of Key Findings
Age (years) | –.01 (.004) | –.001 (.003) | –.003 (.002)
Gender (female = 1) | .12 (.13) | .001 (.09) | .14 (.08)
Level of education | –.06 (.04) | –.01 (.03) | .03 (.03)
Ethnicity (white = 1) | –.13 (.19) | .20 (.13) | .01 (.12)
Ballot measure assignment (Measure 92 = 1, Measure 90 = 0) | .34** (.12) | .33** (.09) | .07 (.08)
Voting preference of ‘yes’ (yes = 1; no/undecided = 0) | .15 (.14) | .13 (.09) | .08 (.08)
Voting preference of ‘no’ (no = 1; yes/undecided = 0) | –.01 (.14) | –.08 (.09) | –.09 (.08)
Awareness of the CIR | .21 (.15) | .01 (.11) | .36** (.09)
Faith in deliberation | –.13 (.09) | .05 (.06) | .12* (.05)
Strength of partisanship | .11 (.07) | .05 (.05) | .03 (.04)
Conservatism | .03 (.04) | .02 (.03) | –.06** (.02)
Interest in politics | .04 (.09) | .07 (.06) | –.03 (.05)
Voting frequency | .03 (.08) | .11* (.05) | –.04 (.05)
Perceived bias in Key Findings | | | .03 (.04)
Absolute value of perceived bias in Key Findings | | | .02 (.05)
R2 and (R2adjusted) | .07 (.03) | .10** (.06) | .14** (.10)
  • Note: N = 312. * p < .05, ** p < .01, two-tailed tests. Main row entries are unstandardized regression coefficients, with their standard errors in parentheses.

The last regression model predicted perceived credibility of the Key Findings, R2 = .14, F(15, 296) = 3.26, p < .001. Table 5 shows that, though this criterion was not significantly predicted by voters’ preexisting preferences, those with higher confidence in their knowledge of the CIR (β = .22, p < .001) and faith in deliberation (β = .12, p = .029) did consider the Key Findings more credible. Thus, results supported both H2 and H3 but not H1.

Summary

Although none of our models supported H1, the findings generally supported both H2 and H3. In other words, voters’ preexisting preferences on a ballot measure did not predict their perceptions of the Citizens’ Statement in the direction HMP theory anticipates. By contrast, voters’ confidence in their knowledge of the CIR and faith in deliberation positively predicted how they perceived each of the Statement’s parts, except for perceptions of the Key Findings as unbiased.

Discussion

Our study of the 2014 Oregon CIR found no evidence that reading the Citizens’ Statements produced hostile media perceptions. In fact, the most relevant significant effects we found were the opposite of what HMP would predict: the Statement’s pro arguments were perceived as stronger by voters supporting the ballot measure and as weaker by those opposing it. This pattern suggests that voters judged the arguments by how much they agreed with them rather than by how well the Statement represented their own side. As such, these results are more consistent with research on confirmation bias (Taber & Lodge, 2006) or ‘biased assimilation’ (see Hergovich, Schott & Burger, 2010), wherein people perceive information ‘confirming their preexisting attitude as more convincing than’ information ‘disconfirming their preexisting attitude’ and vice versa (Munro & Ditto, 1997, p. 636). Based on these findings, we suggest that future research approach minipublics from a confirmation bias perspective (Már & Gastil, 2020) rather than one based on HMP.

We did find ample evidence in support of the hypotheses derived from deliberative theory. Voters with more confidence in their knowledge of the CIR and more faith in the efficacy of public deliberation generally gave higher ratings to both the pro and con arguments in the Citizens’ Statements, and they rated the Key Findings as more credible. These findings complement those of previous minipublic studies (Boulianne, 2018; Cutler et al., 2008; Fournier et al., 2011; Már & Gastil, 2020; Niessen, 2019) by showing how deliberative media’s utility hinges on a minipublic’s transparency and openness, as well as its internal deliberative quality (Carson, 2011; Karpowitz & Raphael, 2014; O’Flynn & Sood, 2014). Practitioners should therefore consider how future minipublics can be designed transparently, particularly with respect to their deliberative media. Future research should explore where and how citizens’ faith in public deliberation can be bolstered within a deliberative system (Manosevitch, 2019) so as to boost minipublics’ deliberative quality.

That said, this study had considerable limitations. First, it was a secondary analysis of cross-sectional data collected for another purpose (see Gastil et al., 2015). Thus, we could not make causal inferences about the relationships we found between variables. Moreover, we did not measure perceptions of the Statement’s reach (i.e., how many people read it); future research on minipublics should examine how their perceived reach affects public perceptions of their outputs. Second, the survey design did not allow us to measure how carefully voters actually read the Citizens’ Statement. A more careful study would discern which of its parts held their attention and for how long. Finally, the survey only allowed us to examine perceptions of CIR messages about two ballot measures, one on electoral processes and one on food labeling. Since HMP may be issue-dependent (D’Alessio, 2003), future research may find that these results do not generalize across a wider range of policy questions.

Overall, our study showed the CIR’s Citizens’ Statements to be of high deliberative quality. This supports the idea that minipublics like the CIR can serve as ‘trusted information proxies’ that citizens can use in forming opinions before making decisions (MacKenzie & Warren, 2012; Warren & Gastil, 2015). People may not perceive the deliberative media generated by a minipublic with the same hostility as they do other mass media. Nevertheless, we did find evidence of confirmation bias in the ratings of some arguments, and this could limit the efficacy of minipublic reports. Therefore, we recommend continued study of how minipublics can most effectively generate messages that enhance the larger deliberative system.

Acknowledgements

The data presented herein were collected using funds from a National Science Foundation grant through the Decision, Risk and Management Sciences program (Award #1357276/1357444). The views expressed in this article are solely those of the authors. The authors thank Michael Schmierbach of the Donald P. Bellisario College of Communications at the Pennsylvania State University and John Rountree of the Department of Communication Studies at the University of Houston-Downtown, both of whom provided comments on earlier versions of this article.

Competing Interests

The authors have no competing interests to declare.

References

1 Arceneaux, K., & Johnson, M. (2015). How does media choice affect hostile media perceptions? Evidence from participant preference experiments. Journal of Experimental Political Science, 2, 12–25. DOI:  http://doi.org/10.1017/xps.2014.10

2 Arceneaux, K., Johnson, M., & Murphy, C. (2012). Polarized political communication, oppositional media hostility, and selective exposure. The Journal of Politics, 74, 174–186. DOI:  http://doi.org/10.1017/S002238161100123X

3 Ardèvol-Abreu, A., & Gil De Zúñiga, H. (2017). Effects of editorial media bias perception and media trust on the use of traditional, citizen, and social media news. Journalism & Mass Communication Quarterly, 94, 703–724. DOI:  http://doi.org/10.1177/1077699016654684

4 Ariyanto, A., Hornsey, M. J., & Gallois, C. (2007). Group allegiances and perceptions of media bias: Taking into account both the perceiver and the source. Group Processes & Intergroup Relations, 10, 266–279. DOI:  http://doi.org/10.1177/1368430207074733

5 Arlt, D., & Wolling, J. (2016). The refugees: Threatening or beneficial? Exploring the effects of positive and negative attitudes and communication on hostile media perceptions. Global Media Journal: German Edition, 6, 1–21.

6 Arpan, L. M., & Raney, A. A. (2003). An experimental investigation of news source and the hostile media effect. Journalism & Mass Communication Quarterly, 80, 265–281. DOI:  http://doi.org/10.1177/107769900308000203

7 Bächtiger, A., Setälä, M., & Grönlund, K. (2014). Towards a new era of deliberative mini-publics. In K. Grönlund, A. Bächtiger, & M. Setälä (Eds.), Deliberative mini-publics: Involving citizens in the democratic process (pp. 225–245). ECPR Press.

8 Borah, P., Thorson, K., & Hwang, H. (2015). Causes and consequences of selective exposure among political blog readers: The role of hostile media perception in motivated media use and expressive participation. Journal of Information Technology & Politics, 12, 186–199. DOI:  http://doi.org/10.1080/19331681.2015.1008608

9 Boulianne, S. (2018). Mini-publics and public opinion: Two survey-based experiments. Political Studies, 66, 119–136. DOI:  http://doi.org/10.1177/0032321717723507

10 Brinker, D. L., Gastil, J., & Richards, R. C. (2015). Inspiring and informing citizens online: A media richness analysis of varied civic education modalities. Journal of Computer-Mediated Communication, 20, 504–519. DOI:  http://doi.org/10.1111/jcc4.12128

11 Caluwaerts, D., & Reuchamps, M. (2015). Strengthening democracy through bottom-up deliberation: An assessment of the internal legitimacy of the G1000 project. Acta Politica, 50, 151–170. DOI:  http://doi.org/10.1057/ap.2014.2

12 Carcasson, M., & Sprain, L. (2010). Key aspects of the deliberative democracy movement. Public Sector Digest, 1–5.

13 Carr, D. J., Barnidge, M., Lee, B. G., & Tsang, S. J. (2014). Cynics and skeptics: Evaluating the credibility of mainstream and citizen journalism. Journalism & Mass Communication Quarterly, 91, 452–470. DOI:  http://doi.org/10.1177/1077699014538828

14 Carson, L. (2011). Dilemmas, disasters, and deliberative democracy: Getting the public back into policy. Griffith Review, 32, 33–40.

15 Ceron, A., & Memoli, V. (2015). Trust in government and media slant: A cross-sectional analysis of media effects in twenty-seven European countries. The International Journal of Press/Politics, 20, 339–359. DOI:  http://doi.org/10.1177/1940161215572634

16 Chambers, S. (2009). Rhetoric and the public sphere: Has deliberative democracy abandoned mass democracy? Political Theory, 37, 323–350. DOI:  http://doi.org/10.1177/0090591709332336

17 Chia, S. C., Yong, S. Y. J., Wong, Z. W. D., & Koh, W. L. (2007). Personal bias or government bias? Testing the hostile media effect in a regulated press system. International Journal of Public Opinion Research, 19, 313–330. DOI:  http://doi.org/10.1093/ijpor/edm011

18 Choi, J., Yang, M., & Chang, J. J. (2009). Elaboration of the hostile media phenomenon: The roles of involvement, media skepticism, congruency of perceived media influence, and perceived opinion climate. Communication Research, 36, 54–75. DOI:  http://doi.org/10.1177/0093650208326462

19 Christen, C. T., Kannaovakun, P., & Gunther, A. C. (2002). Hostile media perceptions: Partisan assessments of press and public during the 1997 United Parcel Service strike. Political Communication, 19, 423–436. DOI:  http://doi.org/10.1080/10584600290109988

20 Chung, M., Munno, G. J., & Moritz, B. (2015). Triggering participation: Exploring the effects of third-person and hostile media perceptions on online participation. Computers in Human Behavior, 53, 452–461. DOI:  http://doi.org/10.1016/j.chb.2015.06.037

21 Coe, K., Tewksbury, D., Bond, B. J., Drogos, K. L., Porter, R. W., Yahn, A., & Zhang, Y. (2008). Hostile news: Partisan use and perceptions of cable news programming. Journal of Communication, 58, 201–219. DOI:  http://doi.org/10.1111/j.1460-2466.2008.00381.x

22 Cohen, G. L. (2003). Party over policy: The dominating impact of group influence on political beliefs. Journal of Personality and Social Psychology, 85, 808–822. DOI:  http://doi.org/10.1037/0022-3514.85.5.808

23 Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). DOI:  http://doi.org/10.4324/9780203771587

24 Crosby, N., & Nethercutt, D. (2005). Citizens Juries: Creating a trustworthy voice of the people. In J. Gastil & P. Levine (Eds.), The deliberative democracy handbook: Strategies for effective civic engagement in the twenty-first century (pp. 111–119). San Francisco, CA: Jossey-Bass.

25 Curato, N., & Böker, M. (2016). Linking mini-publics to the deliberative system: A research agenda. Policy Sciences, 49, 173–190. DOI:  http://doi.org/10.1007/s11077-015-9238-5

26 Cutler, F., Johnston, R., Carty, R. K., Blais, A., & Fournier, P. (2008). Deliberation, information, and trust: The British Columbia Citizens’ Assembly as agenda setter. In M. E. Warren & H. Pearse (Eds.), Designing deliberative democracy: The British Columbia Citizens’ Assembly (pp. 166–191). Cambridge: Cambridge University Press. DOI:  http://doi.org/10.1017/CBO9780511491177.010

27 D’Alessio, D. (2003). An experimental examination of readers’ perceptions of media bias. Journalism & Mass Communication Quarterly, 80, 282–294. DOI:  http://doi.org/10.1177/107769900308000204

28 Devillers, S., Vrydagh, J., Caluwaerts, D., & Reuchamps, M. (2020). Invited but not selected: The perceptions of a mini-public by randomly invited–but not selected–citizens (No. 4; ConstDelib Working Paper Series, pp. 1–21).

29 Dryzek, J. S. (2009). Democratization as deliberative capacity building. Comparative Political Studies, 42, 1379–1402. DOI:  http://doi.org/10.1177/0010414009332129

30 Dryzek, J. S., Goodin, R. E., Tucker, A., & Reber, B. (2009). Promethean elites encounter precautionary publics: The case of GM foods. Science, Technology, & Human Values, 34, 263–288. DOI:  http://doi.org/10.1177/0162243907310297

31 Duck, J. M., Terry, D. J., & Hogg, M. A. (1998). Perceptions of a media campaign: The role of social identity and the changing intergroup context. Personality and Social Psychology Bulletin, 24, 3–16. DOI:  http://doi.org/10.1177/0146167298241001

32 Eveland, W. P., & Shah, D. V. (2003). The impact of individual and interpersonal factors on perceived news media bias. Political Psychology, 24, 101–117. DOI:  http://doi.org/10.1111/0162-895X.00318

33 Farrell, D. M., & Suiter, J. (2019). Reimagining democracy: Lessons in deliberative democracy from the Irish front line. Ithaca, NY: Cornell University Press. DOI:  http://doi.org/10.7591/9781501749346

34 Feldman, L. (2011). Partisan differences in opinionated news perceptions: A test of the hostile media effect. Political Behavior, 33, 407–432. DOI:  http://doi.org/10.1007/s11109-010-9139-4

35 Felicetti, A., Niemeyer, S., & Curato, N. (2016). Improving deliberative participation: Connecting mini-publics to deliberative systems. European Political Science Review, 8, 427–448. DOI:  http://doi.org/10.1017/S1755773915000119

36 Fishkin, J. S. (2018). Democracy when the people are thinking: Revitalizing our politics through public deliberation. Oxford: Oxford University Press. DOI:  http://doi.org/10.1093/oso/9780198820291.001.0001

37 Fishkin, J. S., & Luskin, R. C. (2005). Experimenting with a democratic ideal: Deliberative polling and public opinion. Acta Politica, 40, 284–298. DOI:  http://doi.org/10.1057/palgrave.ap.5500121

38 Fournier, P., van der Kolk, H., Carty, R. K., Blais, A., & Rose, J. (2011). When citizens decide: Lessons from citizen assemblies on electoral reform. Oxford University Press. DOI:  http://doi.org/10.1093/acprof:oso/9780199567843.001.0001

39 Fung, A. (2003). Survey article: Recipes for public spheres: Eight institutional design choices and their consequences. Journal of Political Philosophy, 11, 338–367. DOI:  http://doi.org/10.1111/1467-9760.00181

40 Fung, A. (2005). Deliberation before the revolution: Toward an ethics of deliberative democracy in an unjust world. Political Theory, 33, 397–419. DOI:  http://doi.org/10.1177/0090591704271990

41 Gastil, J., Deess, E. P., Weiser, P. J., & Simmons, C. (2010). The jury and democracy: How jury deliberation promotes civic engagement and political participation. New York: Oxford University Press.

42 Gastil, J., & Knobloch, K. (2010). Evaluation report to the Oregon State Legislature on the 2010 Oregon Citizens’ Initiative Review. University of Washington.

43 Gastil, J., & Knobloch, K. (2020). Hope for democracy: How citizens can bring reason back into politics. Oxford University Press. DOI:  http://doi.org/10.1093/oso/9780190084523.001.0001

44 Gastil, J., Knobloch, K. R., Reedy, J., Henkels, M., & Cramer, K. (2018). Assessing the electoral impact of the 2010 Oregon Citizens’ Initiative Review. American Politics Research, 46, 534–563. DOI:  http://doi.org/10.1177/1532673X17715620

45 Gastil, J., Knobloch, K. R., & Richards, R. C. (2015). Empowering voters through better information: Analysis of the Citizens’ Initiative Review, 2010–2014. State College: Pennsylvania State University. DOI:  http://doi.org/10.13140/rg.2.1.5007.8881

46 Gastil, J., Richards, R. C., & Knobloch, K. (2014). Vicarious deliberation: How the Oregon Citizens’ Initiative Review influenced deliberation in mass elections. International Journal of Communication, 8, 62–89.

47 Gastil, J., Rosenzweig, E., Knobloch, K. R., & Brinker, D. (2016). Does the public want mini-publics? Voter responses to the Citizens’ Initiative Review. Communication and the Public, 1, 174–192. DOI:  http://doi.org/10.1177/2057047316648329

48 Gearhart, S., Moe, A., & Zhang, B. (2020). Hostile media bias on social media: Testing the effect of user comments on perceptions of news bias and credibility. Human Behavior and Emerging Technologies, 1–8. DOI:  http://doi.org/10.1002/hbe2.185

49 Giffin, K. (1967). The contribution of studies of source credibility to a theory of interpersonal trust in the communication process. Psychological Bulletin, 68, 104. DOI:  http://doi.org/10.1037/h0024833

50 Giner-Sorolla, R., & Chaiken, S. (1994). The causes of hostile media judgments. Journal of Experimental Social Psychology, 30, 165–180. DOI:  http://doi.org/10.1006/jesp.1994.1008

51 Goodin, R. E., & Dryzek, J. S. (2006). Deliberative impacts: The macro-political uptake of mini-publics. Politics & Society, 34, 219–244. DOI:  http://doi.org/10.1177/0032329206288152

52 Grönlund, K., Bächtiger, A., & Setälä, M. (Eds.) (2014). Deliberative mini-publics: Involving citizens in the democratic process. Colchester: ECPR Press.

53 Gunther, A. C. (1992). Biased press or biased public? Attitudes toward media coverage of social groups. Public Opinion Quarterly, 56, 147–167. DOI:  http://doi.org/10.1086/269308

54 Gunther, A. C., & Chia, S. C.-Y. (2001). Predicting pluralistic ignorance: The hostile media perception and its consequences. Journalism & Mass Communication Quarterly, 78, 688–701. DOI:  http://doi.org/10.1177/107769900107800405

55 Gunther, A. C., & Christen, C. T. (2002). Projection or persuasive press? Contrary effects of personal opinion and perceived news coverage on estimates of public opinion. Journal of Communication, 52, 177–195. DOI:  http://doi.org/10.1111/j.1460-2466.2002.tb02538.x

56 Gunther, A. C., Christen, C. T., Liebhart, J. L., & Chia, S. C.-Y. (2001). Congenial public, contrary press, and biased estimates of the climate of opinion. Public Opinion Quarterly, 65, 295–320. DOI:  http://doi.org/10.1086/322846

57 Gunther, A. C., & Liebhart, J. L. (2006). Broad reach or biased source? Decomposing the hostile media effect. Journal of Communication, 56, 449–466. DOI:  http://doi.org/10.1111/j.1460-2466.2006.00295.x

58 Gunther, A. C., Miller, N., & Liebhart, J. L. (2009). Assimilation and contrast in a test of the hostile media effect. Communication Research, 36, 747–764. DOI:  http://doi.org/10.1177/0093650209346804

59 Gunther, A. C., & Schmitt, K. (2004). Mapping boundaries of the hostile media effect. Journal of Communication, 54, 55–70. DOI:  http://doi.org/10.1111/j.1460-2466.2004.tb02613.x

60 Hansen, G. J., & Kim, H. (2011). Is the media biased against me? A meta-analysis of the hostile media effect research. Communication Research Reports, 28, 169–179. DOI:  http://doi.org/10.1080/08824096.2011.565280

61 Hartmann, T., & Tanis, M. (2013). Examining the hostile media effect as an intergroup phenomenon: The role of ingroup identification and status. Journal of Communication, 63, 535–555. DOI:  http://doi.org/10.1111/jcom.12031

62 Hauser, G. A. (2007). Vernacular discourse and the epistemic dimension of public opinion. Communication Theory, 17, 333–339. DOI:  http://doi.org/10.1111/j.1468-2885.2007.00299.x

63 Hendriks, C. M. (2006). Integrated deliberation: Reconciling civil society’s dual role in deliberative democracy. Political Studies, 54, 486–508. DOI:  http://doi.org/10.1111/j.1467-9248.2006.00612.x

64 Herbert, D., & Hansen, J. (2018). ‘You are no longer my flesh and blood’: Social media and the negotiation of a hostile media frame by Danish converts to Islam. Nordic Journal of Religion and Society, 31, 4–21. DOI:  http://doi.org/10.18261/issn.1890-7008-2018-01-01

65 Hergovich, A., Schott, R., & Burger, C. (2010). Biased evaluation of abstracts depending on topic and conclusion: Further evidence of a confirmation bias within scientific psychology. Current Psychology, 29, 188–209. DOI:  http://doi.org/10.1007/s12144-010-9087-5

66 Himmelroos, S. (2017). Discourse quality in deliberative citizen forums–A comparison of four deliberative mini-publics. Journal of Public Deliberation, 13, 1–28. DOI:  http://doi.org/10.16997/jdd.269

67 Hopke, J. E., Highland, E., Rojas, H., & Gunther, A. C. (2010, August). Trusting institutions: Citizen journalism and the hostile media phenomena. Paper presented at the annual conference of the Association for Education in Journalism & Mass Communication, Denver, CO.

68 Houston, J. B., Hansen, G. J., & Nisbett, G. S. (2011). Influence of user comments on perceptions of media bias and third-person effect in online news. Electronic News, 5, 79–92. DOI:  http://doi.org/10.1177/1931243111407618

69 Hovland, C. I., & Weiss, W. (1951). The influence of source credibility on communication effectiveness. Public Opinion Quarterly, 15, 635–650. DOI:  http://doi.org/10.1086/266350

70 Hwang, H., Pan, Z., & Sun, Y. (2008). Influence of hostile media perception on willingness to engage in discursive activities: An examination of mediating role of media indignation. Media Psychology, 11, 76–97. DOI:  http://doi.org/10.1080/15213260701813454

71 Ingham, S., & Levin, I. (2018a). Can deliberative mini publics influence public opinion? Theory and experimental evidence. Political Research Quarterly, 71, 654–667. DOI:  http://doi.org/10.1177/1065912918755508

72 Ingham, S., & Levin, I. (2018b). Effects of deliberative minipublics on public opinion: Experimental evidence from a survey on Social Security reform. International Journal of Public Opinion Research, 30, 51–78. DOI:  http://doi.org/10.1093/ijpor/edw030

73 Johnson, T. J., & Kaye, B. K. (1998). Cruising is believing?: Comparing internet and traditional sources on media credibility measures. Journalism & Mass Communication Quarterly, 75, 325–340. DOI:  http://doi.org/10.1177/107769909807500208

74 Johnson, T. J., & Kaye, B. K. (2004). Wag the blog: How reliance on traditional media and the Internet influence credibility perceptions of weblogs among blog users. Journalism & Mass Communication Quarterly, 81, 622–642. DOI:  http://doi.org/10.1177/107769900408100310

75 Johnson, T. J., Kaye, B. K., Bichard, S. L., & Wong, W. J. (2007). Every blog has its day: Politically-interested Internet users’ perceptions of blog credibility. Journal of Computer-Mediated Communication, 13, 100–122. DOI:  http://doi.org/10.1111/j.1083-6101.2007.00388.x

76 Karpowitz, C. F., & Raphael, C. (2014). Deliberation, democracy, and civic forums: Improving equality and publicity. Cambridge: Cambridge University Press. DOI:  http://doi.org/10.1017/CBO9781107110212

77 Kim, M. (2015). Partisans and controversial news online: Comparing perceptions of bias and credibility in news content from blogs and mainstream media. Mass Communication and Society, 18, 17–36. DOI:  http://doi.org/10.1080/15205436.2013.877486

78 Kiousis, S. (2001). Public trust or mistrust? Perceptions of media credibility in the information age. Mass Communication & Society, 4, 381–403. DOI:  http://doi.org/10.1207/S15327825MCS0404_4

79 Knobloch, K. R., Barthel, M. L., & Gastil, J. (2020). Emanating effects: The impact of the Oregon Citizens’ Initiative Review on voters’ political efficacy. Political Studies, 68, 426–445. DOI:  http://doi.org/10.1177/0032321719852254

80 Knobloch, K. R., & Gastil, J. (2015). Civic (re)socialization: The educative effects of deliberative participation. Politics, 35, 183–200. DOI:  http://doi.org/10.1111/1467-9256.12069

81 Knobloch, K. R., Gastil, J., Feller, T., & Richards, R. C. (2014). Empowering citizen deliberation in direct democratic elections: A field study of the 2012 Oregon Citizens’ Initiative Review. Field Actions Science Reports: The Journal of Field Actions, 1–10.

82 Knobloch, K. R., Gastil, J., Reedy, J., & Walsh, K. C. (2013). Did they deliberate? Applying an evaluative model of democratic deliberation to the Oregon Citizens’ Initiative Review. Journal of Applied Communication Research, 41, 105–125. DOI:  http://doi.org/10.1080/00909882.2012.760746

83 Lafont, C. (2015). Deliberation, participation, and democratic legitimacy: Should deliberative mini-publics shape public policy? Journal of Political Philosophy, 23, 40–63. DOI:  http://doi.org/10.1111/jopp.12031

84 Lee, E.-J. (2012). That’s not the way it is: How user-generated comments on the news affect perceived media bias. Journal of Computer-Mediated Communication, 18, 32–45. DOI:  http://doi.org/10.1111/j.1083-6101.2012.01597.x

85 Lee, T. K., Kim, Y., & Coe, K. (2018). When social media become hostile media: An experimental examination of news sharing, partisanship, and follower count. Mass Communication and Society, 21, 450–472. DOI:  http://doi.org/10.1080/15205436.2018.1429635

86 Lee, T.-T. (2010). Why they don’t trust the media: An examination of factors predicting trust. American Behavioral Scientist, 54, 8–21. DOI:  http://doi.org/10.1177/0002764210376308

87 Lin, M.-C., Haridakis, P. M., & Hanson, G. (2016). The role of political identity and media selection on perceptions of hostile media bias during the 2012 presidential campaign. Journal of Broadcasting & Electronic Media, 60, 425–447. DOI:  http://doi.org/10.1080/08838151.2016.1203316

88 Luskin, R. C., Fishkin, J. S., & Jowell, R. (2002). Considered opinions: Deliberative polling in Britain. British Journal of Political Science, 32, 455–487. DOI:  http://doi.org/10.1017/S0007123402000194

89 MacKenzie, M. K., & Warren, M. E. (2012). Two trust-based uses of minipublics in democratic systems. In J. Parkinson & J. Mansbridge (Eds.), Deliberative systems: Deliberative democracy at the large scale (pp. 95–124). Cambridge: Cambridge University Press. DOI:  http://doi.org/10.1017/CBO9781139178914.006

90 Manosevitch, I. (2019). Deliberative pedagogy in a conflicted society: Cultivating deliberative attitudes among Israeli college students. Higher Education, 78, 745–760. DOI:  http://doi.org/10.1007/s10734-019-00368-6

91 Már, K., & Gastil, J. (2020). Tracing the boundaries of motivated reasoning: How deliberative minipublics can improve voter knowledge. Political Psychology, 41, 107–127. DOI:  http://doi.org/10.1111/pops.12591

92 Matheson, K., & Dursun, S. (2001). Social identity precursors to the hostile media phenomenon: Partisan perceptions of coverage of the Bosnian conflict. Group Processes & Intergroup Relations, 4, 116–125. DOI:  http://doi.org/10.1177/1368430201004002003

93 Metzger, M. J. (2007). Making sense of credibility on the Web: Models for evaluating online information and recommendations for future research. Journal of the American Society for Information Science and Technology, 58, 2078–2091. DOI:  http://doi.org/10.1002/asi.20672

94 Metzger, M. J., & Flanagin, A. J. (2013). Credibility and trust of information in online environments: The use of cognitive heuristics. Journal of Pragmatics, 59, 210–220. DOI:  http://doi.org/10.1016/j.pragma.2013.07.012

95 Milewicz, K. M., & Goodin, R. E. (2018). Deliberative capacity building through international organizations: The case of the Universal Periodic Review of human rights. British Journal of Political Science, 48, 513–533. DOI:  http://doi.org/10.1017/S0007123415000708

96 Moehler, D. C., & Singh, N. (2011). Whose news do you trust? Explaining trust in private versus public media in Africa. Political Research Quarterly, 64, 276–292. DOI:  http://doi.org/10.1177/1065912909349624

97 Morris, J. S. (2007). Slanted objectivity? Perceived media bias, cable news exposure, and political attitudes. Social Science Quarterly, 88, 707–728. DOI:  http://doi.org/10.1111/j.1540-6237.2007.00479.x

98 Munro, G. D., & Ditto, P. H. (1997). Biased assimilation, attitude polarization, and affect in reactions to stereotype-relevant scientific information. Personality and Social Psychology Bulletin, 23, 636–653. DOI:  http://doi.org/10.1177/0146167297236007

99 Niemeyer, S. (2011). The emancipatory effect of deliberation: Empirical lessons from mini-publics. Politics & Society, 39, 103–140. DOI:  http://doi.org/10.1177/0032329210395000

100 Niemeyer, S. (2014). Scaling up deliberation to mass publics: Harnessing mini-publics in a deliberative system. In K. Grönlund, A. Bächtiger, & M. Setälä (Eds.), Deliberative mini-publics: Involving citizens in the democratic process (pp. 177–202). Colchester: ECPR Press.

101 Niemeyer, S., & Jennstål, J. (2018). Scaling up deliberative effects: Applying lessons of mini-publics. In A. Bächtiger, J. S. Dryzek, J. Mansbridge, & M. E. Warren (Eds.), The Oxford handbook of deliberative democracy (pp. 329–347). New York: Oxford University Press. DOI:  http://doi.org/10.1093/oxfordhb/9780198747369.013.31

102 Niessen, C. (2019). When citizen deliberation enters real politics: How politicians and stakeholders envision the place of a deliberative mini-public in political decision-making. Policy Sciences, 52, 481–503. DOI:  http://doi.org/10.1007/s11077-018-09346-8

103 Nix, J., & Pickett, J. T. (2017). Third-person perceptions, hostile media effects, and policing: Developing a theoretical framework for assessing the Ferguson effect. Journal of Criminal Justice, 51, 24–33. DOI:  http://doi.org/10.1016/j.jcrimjus.2017.05.016

104 O’Flynn, I., & Sood, G. (2014). What would Dahl say? An appraisal of the democratic credentials of deliberative polls and other mini-publics. In K. Grönlund, A. Bächtiger, & M. Setälä (Eds.), Deliberative mini-publics: Involving citizens in the democratic process (pp. 41–58). Colchester: ECPR Press.

105 Olsen, E. D., & Trenz, H.-J. (2014). From citizens’ deliberation to popular will formation? Generating democratic legitimacy in transnational deliberative polling. Political Studies, 62, 117–133. DOI:  http://doi.org/10.1111/1467-9248.12021

106 Parkinson, J., & Mansbridge, J. (Eds.) (2012). Deliberative systems: Deliberative democracy at the large scale. Cambridge: Cambridge University Press. DOI:  http://doi.org/10.1017/CBO9781139178914

107 Perloff, R. M. (1989). Ego-involvement and the third person effect of televised news coverage. Communication Research, 16, 236–262. DOI:  http://doi.org/10.1177/009365089016002004

108 Perloff, R. M. (2015). A three-decade retrospective on the hostile media effect. Mass Communication and Society, 18, 701–729. DOI:  http://doi.org/10.1080/15205436.2015.1051234

109 Pew Research Center. (2017). The partisan divide on political values grows even wider: Sharp shifts among Democrats on aid to needy, race, immigration. Pew Research Center. Retrieved from https://www.people-press.org/2017/10/05/the-partisan-divide-on-political-values-grows-even-wider/

110 Pfau, M., Kenski, H. C., Nitz, M., & Sorenson, J. (1990). Efficacy of inoculation strategies in promoting resistance to political attack messages: Application to direct mail. Communication Monographs, 57, 25–43. DOI:  http://doi.org/10.1080/03637759009376183

111 Pornpitakpan, C. (2004). The persuasiveness of source credibility: A critical review of five decades’ evidence. Journal of Applied Social Psychology, 34, 243–281. DOI:  http://doi.org/10.1111/j.1559-1816.2004.tb02547.x

112 Rahman, B. H. (2014). Conditional influence of media: Media credibility and opinion formation. Journal of Political Studies, 21, 299–314.

113 Rainie, L., Keeter, S., & Perrin, A. (2019, July). Trust and distrust in America. Pew Research Center. Retrieved from https://www.people-press.org/2019/07/22/trust-and-distrust-in-america/

114 Reid, S. A. (2012). A self-categorization explanation for the hostile media effect. Journal of Communication, 62, 381–399. DOI:  http://doi.org/10.1111/j.1460-2466.2012.01647.x

115 Richardson, J. D., Huddy, W. P., & Morgan, S. M. (2008). The hostile media effect, biased assimilation, and perceptions of a presidential debate. Journal of Applied Social Psychology, 38, 1255–1270. DOI:  http://doi.org/10.1111/j.1559-1816.2008.00347.x

116 Rinke, E. M., Wessler, H., Löb, C., & Weinmann, C. (2013). Deliberative qualities of generic news frames: Assessing the democratic value of strategic game and contestation framing in election campaign coverage. Political Communication, 30, 474–494. DOI:  http://doi.org/10.1080/10584609.2012.737432

117 Ryan, M., & Smith, G. (2014). Defining mini-publics. In K. Grönlund, A. Bächtiger, & M. Setälä (Eds.), Deliberative mini-publics: Involving citizens in the democratic process (pp. 9–26). Colchester: ECPR Press.

118 Schmitt, K. M., Gunther, A. C., & Liebhart, J. L. (2004). Why partisans see mass media as biased. Communication Research, 31, 623–641. DOI:  http://doi.org/10.1177/0093650204269390

119 Setälä, M., & Smith, G. (2018). Mini-publics and deliberative democracy. In A. Bächtiger, J. S. Dryzek, J. Mansbridge, & M. E. Warren (Eds.), The Oxford handbook of deliberative democracy (pp. 300–314). New York: Oxford University Press.

120 Shin, J., & Thorson, K. (2017). Partisan selective sharing: The biased diffusion of fact-checking messages on social media. Journal of Communication, 67, 233–255. DOI:  http://doi.org/10.1111/jcom.12284

121 Smith, A. (2009, April). The internet’s role in campaign 2008 (Pew Internet & American Life Project). Pew Research Center. Retrieved from https://www.pewresearch.org/internet/2009/04/15/the-internet-as-a-source-of-political-news-and-information/

122 Smith, G. (2012). Deliberative democracy and mini-publics. In B. Geissel & K. Newton (Eds.), Evaluating democratic innovations: Curing the democratic malaise? (pp. 90–111). London: Routledge.

123 Swift, A. (2016, September). Americans’ trust in mass media sinks to new low. Gallup. Retrieved from https://news.gallup.com/poll/195542/americans-trust-mass-media-sinks-new-low.aspx

124 Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50, 755–769. DOI:  http://doi.org/10.1111/j.1540-5907.2006.00214.x

125 Tsfati, Y. (2007). Hostile media perceptions, presumed media influence, and minority alienation: The case of Arabs in Israel. Journal of Communication, 57, 632–651. DOI:  http://doi.org/10.1111/j.1460-2466.2007.00361.x

126 Tsfati, Y., & Cohen, J. (2005). Democratic consequences of hostile media perceptions: The case of Gaza settlers. Harvard International Journal of Press/Politics, 10, 28–51. DOI:  http://doi.org/10.1177/1081180X05280776

127 Tsfati, Y., & Cohen, J. (2013). Perceptions of media and media effects: The third person effect, trust in media and hostile media perceptions. In A. N. Valdivia & E. Scharrer (Eds.), The international encyclopedia of media studies: Media effects/media psychology (1st ed., pp. 1–19). Blackwell Publishing Ltd. DOI:  http://doi.org/10.1002/9781444361506.wbiems995

128 Vallone, R. P., Ross, L., & Lepper, M. R. (1985). The hostile media phenomenon: Biased perception and perceptions of media bias in coverage of the Beirut massacre. Journal of Personality and Social Psychology, 49, 577–585. DOI:  http://doi.org/10.1037/0022-3514.49.3.577

129 van der Wurff, R., De Swert, K., & Lecheler, S. (2018). News quality and public opinion: The impact of deliberative quality of news media on citizens’ argument repertoire. International Journal of Public Opinion Research, 30, 233–256. DOI:  http://doi.org/10.1093/ijpor/edw024

130 Warren, M. E., & Gastil, J. (2015). Can deliberative minipublics address the cognitive challenges of democratic citizenship? The Journal of Politics, 77, 562–574. DOI:  http://doi.org/10.1086/680078

131 Warren, M. E., & Pearse, H. (Eds.) (2008). Designing deliberative democracy: The British Columbia Citizens’ Assembly. Cambridge: Cambridge University Press. DOI:  http://doi.org/10.1017/CBO9780511491177

132 Wathen, C. N., & Burkell, J. (2002). Believe it or not: Factors influencing credibility on the Web. Journal of the American Society for Information Science and Technology, 53, 134–144. DOI:  http://doi.org/10.1002/asi.10016

133 Wessler, H. (2008a). Deliberativeness in political communication. In W. Donsbach (Ed.), The international encyclopedia of communication (1st ed., pp. 1–6). Malden, MA: John Wiley & Sons, Ltd. DOI:  http://doi.org/10.1002/9781405186407.wbiecd011

134 Wessler, H. (2008b). Investigating deliberativeness comparatively. Political Communication, 25, 1–22. DOI:  http://doi.org/10.1080/10584600701807752

135 Wessler, H., & Rinke, E. M. (2014). Deliberative performance of television news in three types of democracy: Insights from the United States, Germany, and Russia. Journal of Communication, 64, 827–851. DOI:  http://doi.org/10.1111/jcom.12115

136 White, S. K. (2010). Fullness and dearth: Depth experience and democratic life. American Political Science Review, 104, 800–816. DOI:  http://doi.org/10.1017/S0003055410000365

137 Wimmer, R. D., & Dominick, J. R. (2013). Mass media research: An introduction (10th ed.). Boston, MA: Wadsworth Cengage Learning.

138 Yankelovich, D. (1991). Coming to public judgment: Making democracy work in a complex world. Syracuse, NY: Syracuse University Press.

139 Yun, G. W., Park, S.-Y., & Lee, S. (2016). Inside the spiral: Hostile media, minority perception, and willingness to speak out on a Weblog. Computers in Human Behavior, 62, 236–243. DOI:  http://doi.org/10.1016/j.chb.2016.03.086

140 Yun, G. W., Park, S.-Y., Lee, S., & Flynn, M. A. (2018). Hostile media or hostile source? Bias perception of shared news. Social Science Computer Review, 36, 21–35. DOI:  http://doi.org/10.1177/0894439316684481