Crowdsourcing is the practice of obtaining information from a large number of people for a task or project. The data is typically collected via the internet, and contributors are sometimes paid for their input. This report focuses on trust management issues in crowdsourcing, the challenges they raise, and opportunities to overcome them. When collecting results for HITs (Human Intelligence Tasks), some workers submit spam or deliberately wrong input simply to complete the HIT and collect the reward. This degrades the quality of the information used in the project, and reduced quality has a direct impact on the project outcome. At the same time, crowdsourcing also depends on quantity. Traditional trust management techniques rely on the Service Provider (SP) to achieve the highest possible accuracy on the data. By combining findings from several studies, including a simulation test-bed based on the characteristics of Amazon Mechanical Turk (AMT), this report attempts to address these challenges and to identify new opportunities in crowdsourcing trust management.
Recently, the philosophy of how computers are used has been shifting. Rather than pushing for further automation with ever more sophisticated computing systems to handle problems that humans find easy but computers struggle with (e.g., transcribing a video, completing a survey), a new class of computing platforms has emerged that outsources such tasks to a distributed group of people. This class of platforms is known as crowdsourcing systems. Well-known examples include Amazon's Mechanical Turk (AMT), 99designs, and Mob4hire.
Crowdsourcing systems are a new type of e-commerce system. Many of them operate in a similar manner to informal online job agencies. Each has a registered user base consisting of requesters, who have projects that need to be outsourced, and workers, who are willing to offer their time and effort to complete these tasks in exchange for monetary gain. A requester breaks a project into small Human Intelligence Tasks (HITs) using proprietary tools provided by the crowdsourcing system, then publishes them for workers to take up. A monetary reward is associated with each HIT.
In such an open system, workers may have varying levels of reliability, or may even act maliciously when working on HITs because of ulterior motives. Reputation management has been shown to be a promising approach to this problem. Recent works have studied how to elicit fair assessments of the quality of HIT results and how to minimise errors when updating workers' reputations. However, there is one major difference between crowdsourcing systems and the settings for which existing trust management models were proposed: existing trust models assume no limit on the capacity of a trustee, whereas in crowdsourcing systems, trustees are people who are inherently capacity-constrained. This mismatch has significant implications for the applicability of existing reputation models in crowdsourcing. The success of a HIT depends on both the quality and the timeliness of its result. Careless allocation decisions made by existing reputation models have been found to over-use a few highly reputable workers, which reduces the business throughput of a crowdsourcing system and thereby makes reputation management unattractive.
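To make the idea of updating a worker's reputation concrete, the following is a minimal sketch of one common scheme, a beta reputation model, under the assumption that a worker's history is simply counts of accepted and rejected HIT results. The `Worker` class and its methods are illustrative names, not taken from any particular system described above.

```python
class Worker:
    def __init__(self, name):
        self.name = name
        self.accepted = 0   # HIT results judged satisfactory
        self.rejected = 0   # HIT results judged unsatisfactory

    def record(self, satisfactory):
        """Update the worker's history after a HIT result is reviewed."""
        if satisfactory:
            self.accepted += 1
        else:
            self.rejected += 1

    def reputation(self):
        # Expected value of a Beta(accepted + 1, rejected + 1) distribution:
        # an unknown worker starts at 0.5, and the score converges to the
        # observed success rate as evidence accumulates.
        return (self.accepted + 1) / (self.accepted + self.rejected + 2)

w = Worker("w1")
for ok in [True, True, True, False]:
    w.record(ok)
print(round(w.reputation(), 2))  # (3 + 1) / (4 + 2) -> 0.67
```

A requester (or an allocation mechanism) can then rank workers by this score, which is exactly where the capacity problem above appears: ranking alone keeps sending HITs to the same few high-scoring workers.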
The term crowdsourcing was coined by Jeff Howe in 2006, and since then the field has grown exponentially. If a task requires a particular skill set, companies must choose between building an in-house department and hiring outsiders. Specialists can be expensive, and offshore outsourcing suffers from communication and/or cultural barriers, so crowdsourcing has become a popular option (Howe, 2006). There are several challenges associated with crowdsourcing, particularly with regard to quality. Amazon's Mechanical Turk system asks people to do tasks such as identifying photographs, writing product descriptions, or transcribing audio. These are things that people can do better than computers; however, as described in an article in Wired Magazine, the workers who contribute to these tasks may take shortcuts in order to complete the assignments faster and get paid. This can affect quality, making the end result poorer than expected (Howe, 2006). Large companies like Amazon and AOL have moved to crowdsourcing, and successful crowdsourcing companies saw their revenue grow by 74% between 2010 and 2011 (Silverman, 2012).
The difference in cost can be substantial, as demonstrated by the American software company iConclude, which in 2006 replaced outsourced work costing 2,000 USD per unit with work done by individuals found through Amazon's crowdsourcing service, who completed it for 5 USD per unit (Howe, 2006).
Kyle Hawke, co-founder of the entrepreneur help community Whinot and a former manager and consultant at the IT firm Accenture, asks on dailycrowdsource.com whether the clients of crowdsourcing are mistaking quantity for quality. Since participating in one of these small tasks does not require much effort, and the rewards do not depend on the quality of the outcome, many solutions are gathered for the same task. It then takes time and resources to go through these submissions and verify the quality of each solution (Mao, et al., 2016).
Cost benefits, the distinctive nature of the development process, and the uncertainty over the quality of submitted solutions all motivate examining how quality is managed across a completed crowdsourcing process. Moreover, there are limits to what organisations can do to monitor or manage quality during the development of the final solution, since the work is done by individuals, known as workers, who form the crowd.
Traditional trust management approaches exist to help the service provider or requester obtain accurate, high-quality solutions to a task, and several such approaches have already been implemented in crowdsourcing systems. The focus of this paper is therefore to study trust management models and approaches in crowdsourcing systems, with an emphasis on managing quality for requesters.
- Aim of the project
Crowdsourcing is nowadays becoming a major source for collecting important data that can be used in the development of a project or task. The input is provided by a large number of people, usually over the internet. Because of this scale, trust management is a key problem when collecting input from users. Some dishonest workers or attackers flood HITs with irrelevant input, making the collected information inaccurate, so in a crowdsourcing environment the trustworthiness of a worker is important. This report presents a systematic examination of trust management in crowdsourcing systems by extending existing trust management models to crowdsourcing and conducting extensive experiments to study and compare the performance of different trust management models in that setting. The report aims to adapt existing trust management approaches so that they work in crowdsourcing systems; to understand current systems and describe an existing simulated test-bed based on the system characteristics of Amazon's Mechanical Turk (AMT), so that evaluations are close to practical crowdsourcing systems; and to discuss the impact of incorporating trust management into a crowdsourcing system on overall social welfare, identifying challenges and opportunities for future trust management research in crowdsourcing systems.
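The kind of simulated test-bed mentioned above can be sketched very simply: give each simulated worker a hidden reliability (the probability of returning a correct HIT result) and measure the accuracy of completed HITs against a known ground truth. The parameter values below are illustrative assumptions, not taken from the actual AMT-based test-bed.

```python
import random

random.seed(42)  # fixed seed so a simulation run is repeatable

def simulate(num_workers=50, num_hits=200):
    """Run a toy crowdsourcing round and return observed answer accuracy."""
    # Hidden reliability per worker: probability of answering a HIT correctly.
    reliabilities = [random.uniform(0.5, 1.0) for _ in range(num_workers)]
    correct = 0
    for _ in range(num_hits):
        worker = random.randrange(num_workers)       # random HIT assignment
        if random.random() < reliabilities[worker]:  # worker answers correctly?
            correct += 1
    return correct / num_hits

print(simulate())  # fraction of HITs answered correctly, between 0 and 1
```

A trust management model can then be plugged into the assignment step (replacing the random choice of worker) and its effect on accuracy and throughput measured, which is the basic experimental design the report discusses.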
- Research Problem
Crowdsourcing enables a group of unknown individuals from online communities to generate new ideas and create innovations. The crowdsourcing process involves two major parties, namely the crowdsourcer and the crowdsourcee. The former sponsors and posts a task on a crowdsourcing platform by specifying their requirements, while the latter accepts the task and completes it by meeting the crowdsourcer's expectations. Crowdsourcing is frequently risky because crowd workers are uncertain, and the process is also difficult to control in a virtual environment. Consistent with the existing literature, the present study defines risk as an adverse condition that threatens the successful execution of a crowdsourcing project: a combination of estimated loss magnitudes and failure probability. Risk factors have been identified and classified in many different settings, especially in outsourcing.
Currently, trust management models tend to adopt a distributed approach when searching for trustworthy interaction partners, because of concerns about scalability and the information available in practical systems. Accordingly, a distributed approach is adopted here as well, for example by assigning a software agent to each requester to help requesters manage the HIT allocation process. As mentioned previously, traditional trust management research has proposed many models for selecting a single service to complete a task. These are not directly applicable to crowdsourcing systems, where many HITs must be outsourced to substantially more than one worker in order to exploit mass participation. This paper attempts to extend existing trust management approaches to enable them to work in crowdsourcing systems.
- Literature Review
The literature review concentrates on work that designs mechanisms in crowdsourcing systems to induce workers to behave cooperatively. The first paper studied addresses this issue without altering existing crowdsourcing pricing models. In crowdsourcing systems, worker selection is a challenging problem, and researchers have advocated incorporating reputation management into crowdsourcing systems to address it. However, current reputation-based selection often concentrates the allocation of Human Intelligence Tasks (HITs) on a few highly reputable workers in order to reduce risk. This conflicts with the main objective of crowdsourcing systems, which is to promote mass collaboration. The paper proposes a situation-aware approach, SWORD, to enable existing reputation models to work in crowdsourcing systems. It comprehensively considers the objectives of all stakeholders in a crowdsourcing system (including requesters, workers, and system administrators), formulates the HIT allocation problem as a trade-off between quality and timeliness, and proposes an efficient constraint-optimisation approach that produces solutions to this problem with low computational complexity. Extensive simulations designed around realistic conditions from Amazon's Mechanical Turk show a significant advantage of SWORD over existing approaches in improving social welfare (Yu, et al., 2013).
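The quality-versus-timeliness trade-off described above can be illustrated with a toy capacity-aware allocator. This is only a sketch in the spirit of that idea, not the SWORD algorithm itself: a purely reputation-greedy allocator would send every HIT to the single most reputable worker, so here each worker's load is capped and the allocator falls back to the next most reputable worker. The capacity value and worker data are assumed for illustration.

```python
def allocate(hits, workers, capacity=2):
    """hits: list of HIT ids; workers: list of (name, reputation) pairs.
    Returns {hit_id: worker_name}, preferring reputable workers but
    never assigning a worker more than `capacity` HITs."""
    ranked = sorted(workers, key=lambda w: w[1], reverse=True)
    load = {name: 0 for name, _ in ranked}
    assignment = {}
    for hit in hits:
        for name, _rep in ranked:
            if load[name] < capacity:  # respect the worker's limited capacity
                assignment[hit] = name
                load[name] += 1
                break
    return assignment

workers = [("alice", 0.9), ("bob", 0.8), ("carol", 0.6)]
print(allocate(["h1", "h2", "h3", "h4"], workers, capacity=2))
# {'h1': 'alice', 'h2': 'alice', 'h3': 'bob', 'h4': 'bob'}
```

Without the capacity cap, all four HITs would go to alice, delaying results and idling the rest of the crowd, which is precisely the throughput problem the paper identifies.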
The second piece of literature concerns a practical model for the following problem: crowdsourcing has been widely applied to various human intelligence tasks, for example data translation, data prediction, and labelling. However, without sufficient trust management, large numbers of workers submit low-quality or even garbage replies, either to profit themselves or to undermine their rivals' crowdsourcing processes. Such attacks and disturbances not only significantly increase the cost of completing a task, but also drastically reduce the effectiveness of crowdsourcing processes. Selecting dependable workers to participate in tasks has therefore become a top priority in crowdsourcing environments. To achieve effective trustworthy worker selection, three challenging sub-problems must be handled: context-aware trust evaluation, spam worker defence, and trustworthy worker recommendation. Accordingly, a solution covering all three sub-challenges is proposed (Ye, 2015).
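One common spam-worker defence, sketched below under assumptions of this report rather than as the method of the cited paper, mixes a few gold-standard HITs with known answers into each batch and flags any worker whose accuracy on them falls below a threshold. The threshold and the data are illustrative.

```python
def flag_spammers(answers, gold, threshold=0.6):
    """answers: {worker: {hit_id: answer}}; gold: {hit_id: correct_answer}.
    Returns the workers whose accuracy on gold-standard HITs is below
    `threshold`."""
    flagged = []
    for worker, given in answers.items():
        checked = [hit for hit in given if hit in gold]
        if not checked:
            continue  # no gold questions answered yet; cannot judge
        accuracy = sum(given[h] == gold[h] for h in checked) / len(checked)
        if accuracy < threshold:
            flagged.append(worker)
    return flagged

gold = {"g1": "cat", "g2": "dog"}
answers = {
    "honest": {"g1": "cat", "g2": "dog", "t1": "bird"},
    "spammer": {"g1": "dog", "g2": "cat", "t1": "bird"},
}
print(flag_spammers(answers, gold))  # ['spammer']
```

Flagged workers can then be excluded from future HITs or have their reputation scores penalised, feeding back into the worker-selection step.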
- Significance and benefits
Trust management is about managing the quality of the work in the area where it is applied, rather than focusing only on quantity. In crowdsourcing, trust management is used to maintain confidence in the quality of the information provided by large numbers of people, and to keep attacks and spam workers away from HITs. The main difficulty is to maintain this trust continuously in crowdsourcing environments, so different techniques have been developed to prevent attacks and preserve the accuracy of the collected information. Solutions have been found for some problems, but others remain. By studying trust management in crowdsourcing environments, this report aims to build sound knowledge of the area, identify open problems where possible, and attempt to find solutions to the problems discovered, so as to benefit crowdsourcing environments.
- Methodology
To study trust management and its uses in crowdsourcing environments, appropriate methodologies are needed. Methodology is important because it helps direct the research towards an outcome, which may be positive, negative, or neutral. This report uses an analytical methodology, which involves observing the topic closely and studying it in depth. The approach here is to study related research papers to gain a good knowledge of the topic, and also to identify problems in the papers reviewed, or any new findings, where possible. The report is purely analytical, and its outcome depends on the results of that analysis.
- Acosta, M. et al., 2016. Detecting Linked Data Quality Issues via Crowdsourcing: A DBpedia Study, s.l.: IOS Press.
- Anon., 2015. Winners, losers, and deniers: Self-selection in crowd innovation contests and the roles of motivation, creativity, and Skills. Journal of Engineering and Technology Management, Volume 37, pp. 52-64.
- Anon., 2017. Inspiring crowdsourcing communities to create novel solutions: Competition design and the mediating role of trust. Technological Forecasting & Social Change, Volume 117, pp. 296-304.
- Chen, J. J., Menezes, N. J. & Bradley, A. D., n.d. Opportunities for Crowdsourcing Research on Amazon Mechanical Turk, Seattle, WA: s.n.
- Liu, S. et al., 2016. Exploring the trends, characteristic antecedents, and performance. International Journal of Project Management, Volume 34, pp. 1625-1637.
- Majchrzak, A. & Malhotra, A., 2013. Towards an information systems perspective and research agenda on crowdsourcing for innovation. Journal of Strategic Information Systems, Volume 22, pp. 257-568.
- Martinez, G. M., 2015. Solver engagement in knowledge sharing in crowdsourcing communities : Exploring the link to creativity. Research Policy, Volume 44, pp. 1419-1430.
- Simula, H. & Ahola, T., 2014. A network perspective on idea and innovation crowdsourcing in. Industrial Marketing Management, Volume 43, pp. 400-408.
- Ye, B., 2015. Trust Management in Crowdsourcing Environments. New York, NY, USA, IEEE, pp. 121-128.
- Ye, H. & Kankanhalli, A., 2017. Solvers’ participation in crowdsourcing platforms: Examining the impacts of trust, and benefit and cost factors. Journal of Strategic Information Systems, Volume 26, pp. 101-117.
- Yu, H., Shen, Z. & Leung, C., 2013. Bringing Reputation-awareness into Crowdsourcing. Tainan, Taiwan, IEEE, pp. 1-5.