Statement of the Problem
The current study proposes to replicate the findings of Clayton et al. (2019), in which news articles are labeled with a tag stating “Disputed” or “Rated false.” Because the spread of fake news has become such a prominent issue on social media, it is necessary to examine the impact a label can have on the perceived validity of an article. The hypotheses predict that articles with no label will be perceived as more valid whether they are true or false, that a strong label such as “Rated False” will cause skepticism whether the article is true or false, and that a more neutral label such as “Disputed” will cause only mild doubt.
Justification and Significance
Prevalence of Fake News
Prior to determining the most efficient way to combat fake news, it is important to understand how often the issue occurs on social media. Content on social media is predominantly crowd-sourced, so it does not adhere to the same fact-checking process as other sources of information (Jang & Kim, 2018). Journalists once acted as the main gatekeepers, but news consumption has changed dramatically as a result of individuals’ reliance on non-credible sources (Jang et al., 2018). It is estimated that, in the month before the 2016 election, Americans saw on average one to three false stories from known fake news publishers (Lazer et al., 2018). Advances in technology have given people constant access to political information, and many have argued that partisanship and favorability toward a political party play a major role in which fake news articles are spread. Sixty-five percent of users on Facebook and Twitter report that most of the content on each site is politically related (Vendemia, Bond, & DeAndrea, 2019). These technological advances also invite abuse, which now requires websites to be even more vigilant about their security. There have been recent reports of noticeably sophisticated cyborg accounts that are easily hidden within social networks (Grinberg et al., 2019). Many sites work hard to identify and eliminate bot accounts, but partially automated cyborgs have continued to infiltrate the system (Grinberg et al., 2019). Although fake sources accounted for only about 5% of exposures to political URLs during the 2016 election season, that exposure was heavily concentrated among a small fraction of users (Grinberg et al., 2019).
Fake News Impact
Concern has grown that fake news causes confusion in regards to fact-checking, and there is an urgent need to combat the issue (Jang & Kim, 2018). Overall comprehension is compromised by doubt, confusion, and reliance when people are regularly exposed to false information (Rapp & Salovich, 2018). The constant repetition of fake news headlines has been shown to increase their perceived accuracy (Pennycook, Cannon, & Rand, 2018). Additionally, constant exposure to media has been shown to result in a range of attitudes such as cynicism and apathy, as well as an increase in extremist viewpoints (Lazer et al., 2018). Cynicism and apathy compromise an individual’s desire to properly analyze the sources and articles they are presented with on social media. People will begin to think that real news is fake, and the overall accuracy of true articles will be questioned, if people are not prepared for the reality of how often fake news occurs (Van Duyn & Collier, 2019). The current study aims to help find a way to eliminate false articles on social media without compromising the integrity of true articles, allowing the general public to be properly informed. One potential labeling solution is to add a tag when people are presented with fake news: a tag stating the article was “Rated False” lowered its perceived accuracy, while a relatively vague tag such as “Disputed” was not as effective (Clayton et al., 2019).
Why Fake News Works
Another important aspect in the process of analyzing fake news and its prominence is looking at who is deceived by fake news and why. Conspiracy theorists are prone to extreme views because they believe the world is occupied by people who act exclusively in their own best interest without regard for anyone else (Anthony & Moulding, 2018). If an individual regards the media as hostile, and potentially in on the very conspiracy theories they believe, they will not trust the media and will see the information reported as part of the strategy of deception (Van Duyn & Collier, 2019). Additionally, analytic thinking is a heavily contributing factor to the reader’s perceived validity of an article. Those who are willing to think analytically about problems are less likely to believe fake news is accurate (Pennycook & Rand, 2018). Many research studies tend to focus on college undergraduates because such samples provide a relatively uniform education level, and individual differences in cognitive ability are generally overlooked (Frederick, 2005). However, analyzing cognitive ability is important because a wide array of education levels is represented on social media, and these findings indicate that belief in fake news depends on a person’s effort to analyze it rather than accept it at face value. The current study seeks to assess the cognitive effort of the participant to determine just how much of a role that factor plays in the spread of fake news.
Combating Fake News
While it may appear that the simple solution to the growth of fake news is to ignore it, there is evidence that any exposure to misinformation can cause people to begin doubting ideas they should know are completely true (Rapp & Salovich, 2018). Issuing a correction after false information has been spread is ineffective because a correction that challenges a person’s opinion does not decrease their belief in the information (Lewandowsky, Ecker, & Cook, 2017). Using crowdsourced trust ratings to gather information about the reliability of a media outlet is a promising approach, as it may help the social media algorithm rankings that determine which articles people see (Pennycook & Rand, 2019). Prior studies have focused on participants’ responses to the validity of political topics. However, articles on a political or gendered issue are susceptible to bias, so articles with a less polarizing viewpoint will yield more conclusive results (Vendemia, Bond, & DeAndrea, 2019). The current study uses tags attached to an article on a more general topic in order to determine the effectiveness of the label itself.
The study population will consist of adults 18 and older. Ads for the study will be posted on Facebook and the survey to gather participants will be hosted by Survey Monkey. Demographics will be collected in order to create groups that are representative of members of all political parties with a wide range of ages, gender identities, races, and ethnicities represented. Participants in the study will be selected for each group via quota sampling to ensure that no political party is overrepresented.
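The quota sampling step described above can be sketched in Python as follows. This is a minimal illustration only: the party names and per-party caps are hypothetical, since the proposal specifies only that no political party should be overrepresented among the participants.

```python
import random
from collections import defaultdict

# Hypothetical quota targets per political affiliation (the proposal does
# not fix these values; it only requires that no party be overrepresented).
QUOTAS = {"Democrat": 30, "Republican": 30, "Independent": 30}

def quota_sample(respondents, quotas):
    """Accept respondents in arrival order until each party's quota fills."""
    counts = defaultdict(int)
    selected = []
    for person in respondents:
        party = person["affiliation"]
        if counts[party] < quotas.get(party, 0):
            counts[party] += 1
            selected.append(person)
    return selected

# Simulated pool of survey respondents recruited via the Facebook ad.
pool = [{"id": i, "affiliation": random.choice(list(QUOTAS))}
        for i in range(500)]
sample = quota_sample(pool, QUOTAS)
```

Once a party's quota is filled, further respondents from that party are simply skipped, which is what keeps the final sample balanced regardless of who responds to the ad.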
Demographics. In this study, age, gender, level of school, highest degree (including major), ethnicity, and chosen political affiliation will be collected.
The experiment will be conducted in a laboratory setting, and individuals will participate in the study for one hour. They will complete a Cognitive Reflection Test on provided computers, and they will also receive the headline and accompanying article on the same computers.
After consenting, each participant will be given a three-question Cognitive Reflection Test as a means of measuring their critical thinking skills. The test will be taken prior to the participant receiving the article with the headline and pre-determined tag. Each group will consist of thirty participants, for a total of ninety participants. The control group will receive an article with no tag next to the headline, one group will receive an article with a “Disputed” tag next to the headline, and the third group will receive an article with a “Rated False” tag next to the headline. In an effort to create a more realistic setting, participants will receive the headline in a format similar to a Facebook news feed post. To avoid potential bias, each participant will be given an article with a headline stating that eating before exercising is more beneficial than eating after exercising. The participants will then be asked to rate the accuracy of the article using a four-point Likert scale from “Not at all accurate” (1) to “Very accurate” (4).
Scales. For this particular study, the definition of fake news is taken from Lazer et al. (2018), who define it as “fabricated information that mimics news media content in form but not in organizational process or intent” (p. 2). The ratings associated with the headlines (“Rated False,” “Disputed,” and no tag) were previously used by Clayton et al. (2019) in the experiment being replicated. The Cognitive Reflection Test given to participants is the one provided by Frederick (2005) and is used to assess a participant’s ability to override the intuitive answer to a seemingly simple question. The four-point Likert scale ranging from “Not at all accurate” to “Very accurate” used to assess the accuracy of the headline also comes from the study conducted by Clayton et al. (2019).
In order to assess perceptions of the article headlines, a 3 × 2 factorial analysis of variance (ANOVA) will be conducted, with tag condition (“Disputed,” “Rated False,” or no tag) as one factor and Cognitive Reflection Test performance (high vs. low) as the other, to analyze differences from the control group as well as differences in perceived validity between those who score highly and poorly on the test. A t-test for independent means will then be conducted to analyze the specific difference between the “Disputed” and “Rated False” groups, and a second independent-means t-test will compare those who scored highly and poorly on the Cognitive Reflection Test.
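The analysis plan above can be sketched as follows. The ratings here are simulated, and SciPy’s one-way ANOVA across the three tag conditions stands in for the full two-way design, which would typically be fit with a package such as statsmodels.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated accuracy ratings (1-4 Likert scale) for the three tag
# conditions, n = 30 per group as in the proposal; values are illustrative.
no_tag      = rng.integers(1, 5, 30)
disputed    = rng.integers(1, 5, 30)
rated_false = rng.integers(1, 5, 30)

# Omnibus test across the three tag conditions. (The proposal's full 3 x 2
# design adds CRT performance as a second factor, requiring a two-way ANOVA.)
f_stat, p_anova = stats.f_oneway(no_tag, disputed, rated_false)

# Planned comparison between the "Disputed" and "Rated False" groups.
t_stat, p_ttest = stats.ttest_ind(disputed, rated_false)

print(f"ANOVA:  F = {f_stat:.2f}, p = {p_anova:.3f}")
print(f"t-test: t = {t_stat:.2f}, p = {p_ttest:.3f}")
```

The independent-means t-test for high versus low CRT scorers would follow the same `ttest_ind` pattern, with groups split at the chosen cutoff on the three-item test.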
The prediction is that the “Rated False” tag will cause participants to rate the headline as “Not at all accurate” slightly more often than the “Disputed” tag and far more often than the no-tag control. It is also expected that the “Disputed” tag will cause participants to rate the headline as “Not at all accurate” slightly more often than the no-tag control. Participants in the no-tag control group are expected to rate the article as accurate more often than either labeled group. The prediction regarding the Cognitive Reflection Test is that those who score well on the test will analyze the article more carefully regardless of the headline tag, while those who score poorly will be far less likely to analyze the article regardless of the headline tag.
- Anthony, A., & Moulding, R. (2018). Breaking the news: Belief in fake news and conspiracist beliefs. Australian Journal of Psychology, 1-9.
- Clayton, K., Blair, S., Busam, J. A., Forstner, S., Glance, J., Green, G., …Nyhan, B. (2019). Real solutions for fake news? Measuring the effectiveness of general warnings and fact-check tags in reducing belief in false stories on social media. Political Behavior, 1-23.
- Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19, 25-42.
- Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B., & Lazer, D. (2019). Fake news on Twitter during the 2016 U.S. presidential election. Science, 363, 374-378.
- Jang, S. M., & Kim, J. K. (2018). Third person effects of fake news: Fake news regulation and media literacy interventions. Computers in Human Behavior, 80, 295-302.
- Jang, S. M., Geng, T., Li, J., Xia, R., Huang, C., & Kim, H. (2018). A computational approach for examining the roots and spreading patterns of fake news: Evolution tree analysis. Computers in Human Behavior, 84, 103-113.
- Lazer, D., Baum, M., Benkler, Y., Berinsky, A., Greenhill, K., Menczer, F., …Zittrain, J. (2018). The science of fake news. Science, 359, 2-4.
- Lewandowsky, S., Ecker, U., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition, 6, 353-369.
- Pennycook, G., & Rand, D. G. (2018). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. International Journal of Cognitive Science, 1-12.
- Pennycook, G., & Rand, D. G. (2019). Fighting misinformation on social media using crowdsourced judgments of news source quality. PNAS, 7, 2521-2526.
- Pennycook, G., Cannon, T. D., & Rand, D. G. (2018). Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General, 147, 1865-1880.
- Rapp, D. N., & Salovich, N. A. (2018). Can’t we just disregard fake news? The consequences of exposure to inaccurate information. Policy Insights from the Behavioral and Brain Sciences, 5, 232-239.
- Van Duyn, E., & Collier, J. (2019). Priming and fake news: The effects of elite discourse on evaluations of news media. Mass Communication and Society, 22, 29-48.
- Vendemia, M. A., Bond, R. M., & DeAndrea, D. C. (2019). The strategic presentation of user comments affects how political messages are evaluated on social media sites: Evidence for robust effects across party lines. Computers in Human Behavior, 91, 279-289.