At the beginning of the digital revolution in the 1950s and 1960s, as technology shifted from analog and mechanical to digital, computational power was the principal limiter of progress. It was understood that improvements in hardware would yield improvements in performance and efficiency. As the nascent computer industry grew, so did the complexity and density of processors: Moore’s Law (Moore, 1965) projected a yearly doubling in the number of components per integrated circuit, a projection that proved roughly accurate for decades, as illustrated in figure 1.
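The growth Moore projected is a simple exponential, which can be sketched in a few lines of Python (the starting figure below is hypothetical, chosen only to illustrate the arithmetic):

```python
def components(start: int, years: int) -> int:
    """Components per integrated circuit after `years` of annual doubling,
    per Moore's 1965 projection."""
    return start * 2 ** years

# Starting from a hypothetical 64-component chip, ten years of
# annual doubling yields 65,536 components.
print(components(64, 10))  # → 65536
```

The same compounding explains why the projection could hold for decades yet remain easy to state: each year multiplies the previous count by a constant factor of two.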
As processing power increased, ever more information was collected and stored, and the ability to comprehend this data in a meaningful way came to be seen as the next limiter of progress. By 1990, researchers were acknowledging that this deluge of data was becoming problematic from the viewpoints of both storage and comprehension: “The imperative [for scientists] to save all the bits forces us into an impossible situation: The rate and volume of information flow overwhelm our networks, storage devices and retrieval systems, as well as the human capacity for comprehension…” (Denning, 1990). The size and complexity of the data being stored was beginning to outstrip the capability of traditional analytical methods.
In their retrospective study of the evolution of storage systems, Morris and Truskowski noted an important economic tipping point in the mid-1990s: by 1996, digital storage had become more cost-effective than paper for storing data (Morris & Truskowski, 2003), and the price of storage continued to fall. As data storage became ever more efficient and cost-effective, the quantities being stored grew exponentially, projected to exceed 40ZB by 2020, as illustrated in figure 2.
An illustration of the scale of this growth is that by 2014, mankind was producing as much data every two days as was produced from the dawn of civilisation up to 2003 (Kitchin, 2014).
In 2005, the term Big Data was coined by Roger Mougalas to describe sets of data so large as to be almost impossible to manage and process using traditional business intelligence tools (van Rijmenam, 2016). Technology companies, financial institutions and governments across the world began to see Big Data as the next great challenge and opportunity. The idea of being able to comprehend and utilise the flood of data now being captured and stored was recognised as being of great technological and economic value. At the World Economic Forum in Davos in January 2012, data was declared “a new class of economic asset, like currency or gold” (Lohr, 2012). The time of Big Data had come.
In common with other new concepts, the term Big Data quickly came to mean whatever the speaker wanted it to mean. It became an industry buzzword and a new field to report on and champion. Because it impacts diverse fields such as medicine, sociology, economics, computer science, radiology, agriculture and sports science, the term itself was in danger of becoming ambiguous.
To formally define the term, De Mauro, Greco and Grimaldi surveyed contemporary definitions of Big Data, aggregating over 1,500 conference papers and journal articles from 2014 that used the term in their title or abstract, and synthesised a consensual definition: “Big Data represents the Information assets characterised by such a High Volume, Velocity and Variety to require specific Technology and Analytical Methods for its transformation into Value” (De Mauro, et al., 2015).
The challenge of transforming data at this volume, velocity and variety into value is what ties Big Data to machine learning.
The fields of predictive analytics and data mining have long been concerned with finding and describing structural patterns in data, which can then be used to explain the data, influence decisions or predict behaviour. When faced with a very large dataset, the automation of this process becomes a necessity. Machine learning can be defined as “an automated process that extracts patterns from data” (Kelleher, et al., 2015). It can be thought of as the application of statistical models and algorithms to perform a task without explicit instructions.
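A minimal sketch of this idea of extracting a pattern from data, rather than following explicit instructions, is an ordinary least-squares line fit: the relationship between the variables is learned from the observations alone. The data points below are hypothetical, chosen only for illustration:

```python
def fit_line(points):
    """Least-squares fit of y = a*x + b to a list of (x, y) pairs.
    The 'pattern' (slope a, intercept b) is derived entirely from the data."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical observations roughly following y = 2x + 1.
data = [(1, 3.1), (2, 4.9), (3, 7.2), (4, 8.8)]
slope, intercept = fit_line(data)
# The learned pattern (slope ≈ 1.94, intercept ≈ 1.15) can now be used
# to predict y for unseen values of x.
```

At Big Data scale the models and the fitting procedures are far more elaborate, but the principle is the same: the program is given data and an objective, not a rule.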