Cyber propaganda and misleading information have become increasingly prominent in recent years, most notably since the 2016 US election campaign, during which “fake news” became a mainstream term. Misinformation circulates through popular global social networking sites such as Instagram, Facebook, Twitter, LinkedIn, and YouTube. Individuals using these sites are exposed to vast amounts of information and often struggle to distinguish fact from fiction. The popularity of fake news has fuelled the debate over the impact social networking sites have on our society. Academics who have studied the subject have found that social media enables users to reach information faster and to connect on a global scale better than ever before (Hancer, 2017; Newman, Levy, Fletcher & Nielsen, 2016). However accurate that research may be, it fails to consider how quick access to widespread information can serve cyber propaganda, and how instrumental social media has become to the flow of news. This paper examines the relationship between social media and fake news and argues that it has a negative impact on society. Drawing on the work of various scholars and credible research studies, I claim that social media networks support the creation and widespread dissemination of false reports for economic gain while exploiting individuals who lack the education to combat fake news. To gain better insight into the issue, I suggest we examine how the lack of education, biases and echo chambers, and business models have enabled this information flow, and why individuals are increasingly vulnerable to fake news.
Keywords: social media, fake news, propaganda, information, cyber
Fake news, misinformation campaigns, propaganda, and fabricated stories all aim to reach as many individuals as possible: in other words, to get as many shares, likes, and views as possible across all media platforms. There appear to be two main motivations behind pushing fake content to a large audience: pecuniary and ideological (Allcott & Gentzkow, 2017). Pecuniary, or monetary, motivation refers to publishing viral articles that draw a large audience to the originating site, generating advertising revenue. Ideological motivation covers fake news campaigners seeking to promote candidates or opinions they support. Worst of all, whether for pecuniary or ideological purposes, fake news is working. An MIT study found that from Twitter’s inception in 2006 to 2017, approximately 126,000 rumor cascades were spread by about 3 million people more than 4.5 million times (Vosoughi, Roy, & Aral, 2018). This is the reach of just one social media platform instrumental to the dissemination of misinformation. The question becomes: why are people across the globe increasingly vulnerable to misinformation campaigns? In this paper we investigate the lack of education, then biases and echo chambers, and conclude with the business models that have fueled the influence and reach of misinformation in the digital age.
Lack of Education
An old saying goes, “Give a man a fish and you’ll feed him for a day. Teach a man to fish, and you’ve fed him for a lifetime.” It is often invoked to stress the importance of education. Although different versions of the quote are attributed to different philosophers and public figures, they all teach the same lesson: show individuals how to reach solutions rather than handing them one. That knowledge equips a person with the tools to act or react in any situation that arises. Yet knowledge is no longer taught solely by teachers, textbooks, or parents; it now extends to media outlets and the world wide web. When looking for answers, most of us today turn to Google as our source of information, often trusting the links on the first page of results to be both reliable and truthful.
Applying the same set of skills used for print reading to media literacy is naïve yet common. As when reading a textbook in primary or high school, individuals were taught that information presented in an organized and professional manner, with advanced language, diagrams, and statistics, must be academically correct. In an article by Stanford University professor Sam Wineburg, a student asked to distinguish between a biased source and an unbiased one responded, “They seemed equally reliable to me… They are both from academies or institutions that deal with this stuff every day” (Wineburg & McGrew, 2018). Readers tend to judge the validity of information by the appearance of the website or application, or by the views an article receives. The implication is that people are reading the headlines, not the story, and sharing or clicking a post without evaluating the article beforehand.
Wineburg and McGrew (2018) recall a Stanford University research study involving 7,804 responses from students at various educational levels. The students often struggled to distinguish advertisements from news stories, accepted biased or falsified statistics, and treated .org domain names as globally recognized and accredited websites regardless of the content provided.
Beyond fact- and fiction-based information, there exist companies and individuals that attempt to discredit already established information through a strategy built on “public doubt”. The strategy was first used by the tobacco industry to confuse the public about the dangers of smoking (Kakutani, 2018, pp. 74–75). It revolves around using so-called professionals to dispute science-backed research or to insist that more research needs to be done. A similar strategy was applied by Trump’s campaign during the 2016 election to defend his policies.
Misinformation campaigns succeed because they exploit individuals who are unable to distinguish fact from fiction. There is a lack of literacy education that would equip readers with the tools to recognize propaganda, political agendas, or any other motive to skew information: tools such as identifying reliable sources or credentialed authors through preliminary research done before reading. Academics vet authors by relying on peer-reviewed articles and scholarly journals as their sources. In a daily setting, however, this is difficult for users, as access usually requires organizational affiliation (often with schools or academic centers). Without such a database, college students often skip the “About Us” pages, where they are most likely to find credential information (Bulger & Davison, 2018). Since discovering an author’s credentials and cross-checking information against other academic sources is time consuming, readers are left exposed to misinformation through complacency with what is readily available.
“Schools today often fail to teach students what constitutes a credential, where to find evidence of credentials, and why it’s worth the time to do so” (Burkhardt, 2017). Lacking such education, individuals rely on the same skill set they used when reading textbooks: what is provided is what is true. Unawareness of how easy it is to create a fake website, together with the little importance placed on following up on citations and links, increases people’s vulnerability worldwide to falling for and spreading misinformation.
Bias & Echo Chambers
Bias is the action of supporting or opposing one thing, person, or group compared with another, often by allowing personal opinions to influence judgement. Research from Indiana University identified three types of bias that increase individuals’ exposure to misinformation: brain, machine, and society (Ciampaglia & Menczer, 2018). Brain bias, or cognitive bias, refers to the way information encountered daily is processed. When too much information arrives at once, the brain suffers information overload and attention span shrinks; information is then read and shared regardless of its quality, high or low, and regardless of the individual’s usual standards. Under information overload, people are swayed by the emotional connotations of headlines rather than the actual story, with no regard for who the author is. The second grouping, bias in the machine, refers to the algorithms used to determine what users see online. Anything individuals do online is recorded and used to surface the most relevant content, building a more personalized feed of what each user wants and likes to see. Companies want users to click more links and view more pages, since this lets platforms and other companies collect information about users and serve advertisements tailored to their interests (Carr, 2008). In doing so, confirmation bias can be exploited: filters isolate people within certain information feeds and discard diverse perspectives. With such filters in place, disinformation campaigners can push their agenda by tailoring messages to users whose browsing history suggests they are already inclined to believe them. Furthermore, machine algorithms on the web reinforce popularity bias, in which information is promoted based on shares or views irrespective of its quality or credibility.
Society bias refers to people’s tendency to choose friends who share their opinions and views. It is closely related to echo chambers, in which individual beliefs are amplified and reinforced by communication and repetition inside a closed system. A research article from the University of Amsterdam suggests that echo chambers act, metaphorically, as fuel to the flame of misinformation campaigns (Törnberg, 2018). The research indicates that general structural effects of echo chambers contribute to the viral spread of misinformation: individuals who narrow their information feed to shared opinions and views are enticed to share and like an article more often, increasing its popularity globally as it filters through various echo chambers.
With such biases present and echo chambers permitted on social media platforms, misinformation campaigns can abuse algorithms and exploit social media structures to push personal agendas that go viral and appear credible. Individuals risk being locked into their own preferences, with less exposure to other perspectives and opinions. In other words, a bandwagon effect takes place: information is first agreed upon and shared by one individual, eventually leading multiple others within the same echo chamber to agree with their previous neighbour in the wagon.
Business Models
“A subversive industry of fake news has been arising as an independent business opportunity in the news market” (Figueira & Oliveira, 2017). Misinformation campaigns focus on driving high interaction on social networks and generating web traffic to fake news pages, whether to profit through advertising or to damage someone’s image and reputation. It is a malicious business focused on exploiting any opportunity to push personal agendas regardless of social harm. Drawing on Figueira and Oliveira’s research, this section explores the various business models misinformation campaigns use to achieve their goals.
Companies have emerged in the news market that focus solely on amassing vast numbers of URLs devoted to disseminating fake news on social networks, while also enabling users to create their own stories for profit. The Belgian company Media Vibes SNC uses this approach, allowing campaigners or companies to push their advertisements and messages through the company’s social media outlets regardless of information quality (Figueira & Oliveira, 2017). The same company also adopts a user-generated fake news concept, providing users with the tools to develop their own stories and spread them on their own social networks. This entices users to create jokes, sarcasm, provocations, and the like, adopting a “do-it-yourself” mentality that encourages interaction with the application.
Another fake news business model involves companies using URLs very similar to those of popular or reputable news stations and media establishments to publish their articles. These companies mimic existing news outlets in the hope of fooling careless internet users. An individual may draw on prior knowledge of the outlet to accept the legitimacy of an article on screen without checking whether the site actually represents that company.
The last business application of fake news is the use of bots. The People and Power documentary “Disinformation and Democracy” shows viewers the capabilities bots have on social media platforms (Al Jazeera, 2018). Programs exist that allow users to generate bots in the form of fake profiles designed to circulate and amplify information. The documentary examines one such application, UBot Studio, and how its ease of use, low cost, and rapid dissemination features can be used maliciously. Companies use bots to sell services such as Facebook likes, Twitter followers, and YouTube subscribers at low cost.
The news market has been abused by various individual and corporate agendas. Misinformation campaigners use social media platforms to push political messages, to gain engagement and advertising profit, or to damage another’s reputation, doing so by deploying bots, mimicking established news outlets, or plaguing the internet with sites dedicated to fake news.
Misinformation has existed throughout history, but technology has allowed it to be produced and disseminated more powerfully than ever. Awareness of the subject may have been sparked by the term “fake news”, popularized by Donald Trump during the 2016 presidential election, but propaganda and fabricated stories long predate it. I believe that the lack of education in media literacy, individuals’ biases and echo chambers, and the business market created around fake news have made people across the globe increasingly vulnerable to misinformation. The lack of instruction on the importance of credibility, how to verify it, and the fake news that surrounds us has left individuals complacent about what they receive and use as information. Being trapped in personal echo chambers, supported by media platforms and algorithmic biases, has narrowed individuals’ information feeds, leaving them susceptible to skewed information that adheres to their personal views and opinions. Lastly, there is the pursuit of gain, individual or organizational, political or financial, through the dissemination of fake news at every possible opportunity, whether by deploying bots, impersonating reputable news sources, or infesting the internet with as many links to misinformation as possible. It is increasingly difficult to contain and eliminate fake news outlets, yet increasingly easy to create and spread misinformation on a global scale.
- Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236. https://doi.org/10.1257/jep.31.2.211
- Bulger, M., & Davison, P. (2018). The promises, challenges and futures of media literacy. Journal of Media Literacy Education, 10(1), 1–21. http://ra.ocls.ca/ra/login.aspx?inst=georgian&url=http://search.ebscohost.com.eztest.ocls.ca/login.aspx?direct=true&db=ehh&AN=130245864&site=eds-live&scope=site
- Burkhardt, J. M. (2017). Combating fake news in the digital age. Library Technology Reports, 53(8), 1–33. http://dx.doi.org/10.5860/ltr.53n8
- Carr, N. (2008). Is Google making us stupid? 107, 89–94. https://doi.org/10.1111/j.1744-7984.2008.00172.x
- Ciampaglia, G. L., Menczer, F. (2018). Misinformation and biases infect social media, both intentionally and accidentally. Retrieved from https://theconversation.com/misinformation-and-biases-infect-social-media-both-intentionally-and-accidentally-97148
- Figueira, Á, & Oliveira, L. (2017). The current state of fake news: Challenges and opportunities. Retrieved from https://www.sciencedirect.com/science/article/pii/S1877050917323086
- Hancer, E. (2017). The role of social media as a negotiation sphere for public good: The case north Cyprus. Revista de Cercetare Si Interventie Sociala, 59, 86–103. http://ra.ocls.ca/ra/login.aspx?inst=georgian&url=http://search.ebscohost.com.eztest.ocls.ca/login.aspx?direct=true&db=sih&AN=127561332&site=eds-live&scope=site
- Al Jazeera. (2018). Disinformation and democracy [Video]. Retrieved from https://www.youtube.com/watch?v=cuaRz7BOm1A
- Kakutani, M. (2018). The death of truth: Notes on falsehood in the age of Trump. HarperCollins Publishers Limited.
- Newman, N., Levy, D., Fletcher, R., & Nielsen, R. K. (2016). Reuters Institute digital news report 2016: Tracking the future of news. Retrieved from http://www.digitalnewsreport.org/survey/2016/
- Törnberg, P. (2018). Echo chambers and viral misinformation: Modeling fake news as complex contagion. PLoS ONE, 13(9), 1–21. https://doi.org/10.1371/journal.pone.0203958
- Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559
- Wineburg, S., & McGrew, S. (2018). Why students can’t google their way to the truth. Education Week. Retrieved from https://www.edweek.org/ew/articles/2016/11/02/why-students-cant-google-their-way-to.html