
The Journal Impact Factor

The journal impact factor is at present considered a yardstick for measuring the relative quality and significance of a journal. It is defined as the frequency with which the 'average article' in a journal has been cited in a particular year or period. Despite the recognition that the impact factor is an imperfect measure, and forty-five years of criticism, there is no obvious alternative. Thus, those forced to use this tool for direct journal comparison should be encouraged to remain open-minded and cautious, with an awareness of the inherent limitations of its use. Extension of journal-impact-factor data to individual articles and authors is inappropriate and should be avoided. Alternative indices to the impact factor (Thomson Reuters) include Google Scholar, PageRank, the h-index, the Y-factor, the Faculty of 1000 and the Eigenfactor. Some of these alternatives may become more widely accepted than the impact factor in the future.

Keywords: impact factor, h-index, citation, alternative indices

Background

The concept of citations as a tool for 'evaluating' science was first proposed by Eugene Garfield in 1955 (Garfield, 1955). Because only a limited number of journals can be included in the Thomson Reuters (TR) databases (currently about 10,500), analyses based on such a limited dataset (selected in a non-transparent way by TR) have been widely and severely criticized in both developed and developing countries (Molloy, 2007). Despite this criticism, the impact factor (IF), published in the Science Citation Index Journal Citation Reports by the Institute for Scientific Information, is the most commonly used bibliometric criterion. It quantifies the influence of a periodical on secondary publications (Garfield, 1999), and is commonly used not only to rank and evaluate journals, but also for academic promotion and for the selection of research grant applications. There have been simultaneous efforts to find alternative indicators, both using the TR databases and through other innovative methods. Some of these include Google Scholar, PageRank, the h-index, the Y-factor, the Faculty of 1000 and the Eigenfactor (Satyanarayana, 2010).

Impact factor

The impact factor was first described in 1955 by Dr. Eugene Garfield (Jacso, 2001; Lundberg, 2003) and was used in the early 1960s to help select journals for what would evolve to become the Science Citation Index (Garfield, 1999). The Science Citation Index, a commercial property of the Institute for Scientific Information (Philadelphia, Pennsylvania) (Opthof, 1997), is used to generate the Journal Citation Reports, produced annually.

The IF is a simple descriptive quantitative measure of a journal's performance, computed as the average number of times that articles published in the journal during the previous two years have been cited in the current year. For example: Journal X's 2009 impact factor = citations in 2009 (in journals indexed by Thomson Reuters) to all articles published by Journal X in 2007–2008, divided by the number of articles deemed to be "citable" by Thomson Reuters that were published in Journal X in 2007–2008 (Gisvold, 1999).

The journal IF is currently calculated by Thomson Reuters based on citation data from the more than 6,650 journals indexed in the Web of Science database, and is then reported in the Journal Citation Reports (JCR), a database that ranks journals according to their citation performance (Lundberg, 2003).

The impact factor for a given year is calculated using the following formula:

IF (year Y) = [citations received in year Y to items published in the two preceding years] / [number of citable items published in those two years]
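To make the two-year calculation concrete, the following minimal Python sketch computes an impact factor from hypothetical citation and article counts; the journal name and figures are invented for illustration only.

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Impact factor for a given year: citations received this year to items
    published in the previous two years, divided by the number of 'citable'
    items published in those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical example: Journal X published 200 citable articles in 2007-2008,
# and those articles were cited 500 times during 2009.
if_2009 = impact_factor(citations_to_prev_two_years=500,
                        citable_items_prev_two_years=200)
print(f"Journal X 2009 impact factor: {if_2009:.2f}")  # 2.50
```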

Impact of impact factor

Ever since the appearance of the JCR in 1972, there have been attempts to use IF data for comparisons of science, scientists, groups of scientists, scientific disciplines, countries and, of course, scientific journals (Satyanarayana & Sharma, 2008; Seglen, 1997). The IF is primarily meant to be an indicator of the success of a paper in a journal and a surrogate of its direct application in subsequent research. The wide and indiscriminate application of IF and citation data has often resulted in lopsided and unacceptable quality judgements, especially concerning the science and technology capability and strengths of nations, and this has led to severe and serious criticism of the very use of citation-based data for purposes other than journal evaluation. Despite wide and sustained criticism, citation data and the IF continue to be widely used by researchers to choose journals for reading and referencing and, more importantly, to track rivals' publications and citation profiles in order to remain competitive. Journal editors and publishers embrace impact factors, using the IF as a major selling point in pricing and marketing their journals (Kurmis, 2003; Monastersky, 2005).

Librarians continue to rely on impact factors and other citation data when deciding which journals to subscribe to. Potential employers use citation-based parameters to evaluate candidates' bibliographies when making hiring decisions. Many institutions and universities all over the world continue to use citation data for assessments of academic excellence, promotions, awards and rewards. Funding agencies also seek citation indices from applicants in order to evaluate projects for support. Learned societies, national science academies and other bodies conferring awards and rewards all over the world use citation data for decision making (Balaram, 2009).

Incorrect Application of Impact Factors

The quality of an individual scientific research paper is an extremely difficult concept to define and quantify (Bloch & Walter, 2001). The frequency of citation has been adopted as a rough indicator of quality (Saper, 1999). Although a high citation rate may not always be associated with high quality, most citations in most papers are not made in order to refute or discredit the cited work (Callaham et al., 2002). Thus, it is still widely accepted among authors that citation of their work by others imparts a degree of prestige and professional recognition (Reyes, 1998).

While impact factors may be useful for the evaluation of journal quality, this usefulness does not extend to individual articles. In fact, it has been reported that 50% of citations recorded in the Science Citation Index come from just 15% of articles published (Walsh & Weinstein, 1998) and that the most cited 50% of articles account for approximately 90% of citations (Seglen, 1997). Thus, the impact factor of a journal is likely to be largely influenced by a small percentage of its published articles (Hansson, 1995). Similarly, it is important to note that the impact factor does not reflect the quality of the peer review to which a journal subjects its articles (Neuberger & Counsell, 2002).

The Institute for Scientific Information itself suggests that the primary utility of the Journal Citation Reports is to assist librarians and researchers in managing journal collections. In addressing the extension of this tool to academic evaluation, the Institute states that, while the impact factor may provide a gross approximation of the prestige of journals, it does not advise using this value as the sole means of comparative evaluation. Misunderstanding of the impact factor and inappropriate weighting of its importance have affected the author-journal relationship, often greatly influencing authors' selection of the journals to which they submit their manuscripts (Linardi et al., 1996). Many authors may be tempted, or feel pressured, to select the highest-impact-factor journals likely to accept their article for publication, while rejecting journals whose target audience may in fact be more suitable and receptive to the publication itself (Meenen, 1997).

Limitations of the impact factor

Though the impact factor is widely accepted globally, it has also been criticized for a number of limitations. Some of these limitations are discussed below:

1. Impact factor clearly favors journals which publish work by authors who cite their own forthcoming work and who are geographically situated to make their work readily available in preprint form. The measure punishes journals which publish the work of authors who do not have membership of these invisible colleges and is virtually incapable of detecting genuine impact (McGarty, 2000).

2. The second calculation problem is statistical in nature: the JIF is the mean number of citations to an article in the journal in question, yet many authors have found that citation distributions are extremely skewed. Seglen (1997), for instance, found the most cited 15% of papers to account for 50% of citations, and the most cited 50% for 90% of the citations. Hence, on average, the most cited half of papers are cited nine times as often as the least cited half (a short numerical sketch of this skew follows the list below).

3. The impact factor can be influenced and biased (intentionally or otherwise) by many factors.

4. Extension of the impact factor to the assessment of journal quality or individual authors is inappropriate.

5. Extension of the impact factor to cross-discipline journal comparison is also inappropriate.

6. Those who choose to use the impact factor as a comparative tool should be aware of the nature and premise of its derivation and also of its inherent flaws and practical limitations (Kurmis, 2003).

7. It must be recognized that the Science Citation Index includes only approximately 5000 journals (Lankhorst & Franchignoni, 2001) of an estimated world total of 126,000 (Whitehouse, 2002; Seglen, 1997); thus, it represents <4% of all journals. Journals not listed in the Science Citation Index database are often crudely referred to as having no impact factor (zero). This suggests, incorrectly, that 96%, or 121,000, of journals are never formally cited.

8. Citations recorded in journals that are not included in the Science Citation Index do not contribute to impact factor calculations (Talamanca, 2002; Callaham et al., 2002). Seglen reported that, within the field of mathematics, publications that were not included in the Science Citation Index database were cited more frequently than were publications that were included (Seglen, 1997).

9. Review of the journals included in the Science Citation Index database has also shown an enormous bias toward those published in English (Bloch & Walter, 2001; Neuberger & Counsell, 2002; Whitehouse, 2002; Golder, 1998; Winkmann et al., 2002), with non-English-language journals given lower impact factors (Rogers, 2002; Dumontier et al., 2001).

10. Differences in citation (Saper, 1999) and referencing (Linardi et al., 1996) tendencies within individual fields limit the validity of cross-discipline comparison. For example, it has been reported that the mean number of references per article of biochemistry periodicals is three times that of mathematics periodicals (Linardi et al., 1996). Some fields encourage lengthy reference lists, whereas others dictate more concise or restricted bibliographic listings (Sieck, 2002). Because of this, Linardi et al. (1996) suggested that comparisons of journals on the basis of their impact factors should be limited solely to intra-area evaluation; they warned that inter-area comparisons may be both inappropriate and misleading.

11. Ease of access to journals, publication immediacy, and type of publication material have all been identified as contributors to the impact factor. The availability of journals to authors and researchers can vary (Curti et al., 2001). Theoretically, journals published more frequently (Linardi et al., 1996) may be more readily available for citation or may reduce publication lag. The fact that a journal or article is available electronically may also increase the rate of citation and thus the impact factor.

12. The type of research being reported can affect the journal impact factor because of citation limitations. Scientific articles tend to cite only scientific articles, whereas clinical articles cite both scientific and clinical articles, thus allowing a much larger pool for citation. In a similar context, general journals tend to have higher impact factors than specialist journals because of the larger pool for citation (Hecht et al., 1998; Saper, 1999).

13. Finally, those who choose to use the impact factor as a measure of quality must recognize that the Institute for Scientific Information is a private for-profit company that enjoys an unchallenged monopoly on the market of citation-frequency recording. Thus, despite the valuable contribution that this company has made to the scientific community, it does have a commercial interest in the development and application of its products, which may not always align with purely academic intent (Rogers, 2002; Sieck, 2002).
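As flagged in point 2 above, the impact factor is a mean, and citation distributions are typically highly skewed. The short Python sketch below uses an invented set of citation counts to show how a few highly cited papers can dominate the mean while the median, and most papers, sit far below it.

```python
# Hypothetical citation counts for 20 papers published by one journal:
# a handful of highly cited papers and a long tail of rarely cited ones.
citations = [120, 45, 30, 9, 7, 5, 4, 3, 3, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0, 0]

mean = sum(citations) / len(citations)
median = sorted(citations)[len(citations) // 2]  # even count: upper middle value

top_half = sorted(citations, reverse=True)[:len(citations) // 2]
share_of_top_half = sum(top_half) / sum(citations)

print(f"mean citations per paper (impact-factor-style average): {mean:.1f}")   # 11.7
print(f"median citations per paper: {median}")                                 # 2
print(f"share of citations earned by the most cited half: {share_of_top_half:.0%}")  # 97%
```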

Recommendations for improving the impact factors of journals

The lack of an impact factor does not necessarily indicate poor quality, unacceptability or lack of novelty in the research work published. Many novel and exciting papers are published in Bangladeshi journals, but because they are not available online they are not duly appreciated and cited. To improve citation rates and impact factors, the following recommendations can be made:

1. Like many other journals around the world, Bangladeshi journals can suggest that their authors cite a number (5-10) of articles from Bangladeshi journals related to their topic, and this can be considered an added benefit when accepting a manuscript. This would increase the citation ratio and h-index, and hence the impact factor, of these journals.

2. Rapid online publication of all journals and articles.

3. Search engine optimization for the published article.

4. Scientists and researchers in Bangladesh should try to cite more indigenous publications in their papers wherever relevant.

5. Research articles published in local journals should be circulated more extensively throughout the country, both in print and by e-mail.

6. Researchers in Bangladesh should regularly read and study papers published in local journals; at present this practice is highly unsatisfactory.

7. Local journals should improve their review and publication processes so that papers are published more quickly and indigenous researchers are motivated to publish their work in local journals.

8. Journals should try to be indexed in widely accepted indexing systems, archives and databases such as ISI, SJR, PubMed and Elsevier.

9. More review articles should be published, as these articles attract more readers and are cited more often than research reports. Review articles can therefore raise the impact factor of a journal, and review journals will often have the highest impact factors in their respective fields.

10. Journals may choose not to publish minor articles, such as case reports in medical journals, which are unlikely to be cited and would reduce the average citation per article.

11. Journals may change the fraction of "citable items" compared to front-matter in the denominator of the IF equation. Which types of articles are considered "citable" is largely a matter of negotiation between journals and Thomson Scientific. As a result of such negotiations, impact factor variations of more than 300% have been observed. For instance, editorials in a journal are not considered to be citable items and therefore do not enter into the denominator of the impact factor. However, citations to such items will still enter into the numerator, thereby inflating the impact factor. In addition, if such items cite other articles (often even from the same journal), those citations will be counted and will increase the citation count for the cited journal. This effect is hard to evaluate, for the distinction between editorial comment and short original articles is not always obvious. "Letters to the editor" might refer to either class. A small numerical sketch of this numerator-denominator effect follows the list below.

12. Journals may publish a large fraction of their papers, or preferentially papers which they expect to be highly cited, early in the calendar year. This gives those papers more time to gather citations.

13. Several methods, not necessarily with nefarious intent, exist for a journal to cite articles in the same journal which will increase the journal's impact factor.
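To illustrate point 11 numerically, the sketch below uses invented figures to show how excluding front matter (editorials, letters) from the denominator, while still counting citations to it in the numerator, inflates the reported impact factor.

```python
# Hypothetical figures for one journal over the two-year window.
citations_to_research_articles = 400   # citations to 'citable' items
citations_to_front_matter = 100        # citations to editorials, letters, etc.
research_articles = 200                # counted in the denominator
front_matter_items = 100               # excluded from the denominator

# The numerator counts all citations to the journal, but the denominator
# counts only the items deemed 'citable'.
reported_if = (citations_to_research_articles + citations_to_front_matter) / research_articles

# If every published item were counted in the denominator instead:
all_items_if = (citations_to_research_articles + citations_to_front_matter) / (research_articles + front_matter_items)

print(f"reported impact factor:  {reported_if:.2f}")   # 2.50
print(f"all-items impact factor: {all_items_if:.2f}")  # 1.67
```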

Alternative Indices of journal impact

Right from the early 1970s, there have been serious attempts to study the limitations of the IF and other citation-based indices and to devise alternative metrics that address their deficiencies and make evaluation exercises more objective. As early as 1976, a recursive impact factor was proposed, computed from citation data so that citations from high-impact journals are given greater weight than citations from low-impact journals (Narin & Pinski, 1976). The increasing web-based access to and use of scholarly literature through powerful search engines such as Google has facilitated the development of innovative methods and tools to rank scholarly journals. Such methods have helped further refine the evaluation of both science and scientists, both within and outside the citation-based systems. Some of these include PageRank, weighted PageRank, the h-index, the g-factor, the Y-factor, the Euro Factor, the Faculty of 1000 and the Eigenfactor (Resnick, 2004). There have also been several attempts to apply parameters other than the IF to study the issue of 'popularity' versus 'prestige' of journals, a major limitation of the IF and other citation-based indices. Many studies have also compared citation-based data with the new and improved methodologies (Dellavalle et al., 2007). One such comparative analysis has shown that Y-factor ranking helps overcome at least one significant limitation of the IF, namely the higher ranking of review journals as compared to journals publishing original research papers (Satyanarayana & Sharma, 2008).

Google Scholar

Google Scholar (http://scholar.google.com) is a free-to-use search engine developed in 2004, essentially to locate information from learned journals and other sources on the Web. Because of its easy availability, Google Scholar is perhaps one of the most widely used tools by scholars in all disciplines of science and technology. Special functions of Google Scholar include the 'cited by' option, which provides links to other articles that have cited a given paper, among others. However, it is often difficult to obtain relevant information quickly because results are not sifted according to quality. The major limitations of the search engine are that not all records retrieved are peer reviewed, so quality is difficult to judge, and that there is a lack of clarity about how the sources themselves are selected, how their content is analyzed, what time span is covered and how the listing is done (Satyanarayana, 2010).

PageRank™

PageRank is a software system for ranking web pages developed by Google, and it has also been applied to rank research publications. The advantage of this tool is that it draws on a broad range of open data sources, such as Google Scholar (GS), that can locate and retrieve a large number of records. Through the weighted PageRank, the algorithm addresses the issue of 'popularity' versus expert appreciation or 'prestige' of published research, which remains a major limitation of databases such as the SCI. Popular journals are those that are cited frequently, possibly by journals with little prestige; such journals can therefore have a very high IF and a very low weighted PageRank. Prestigious journals, on the contrary, may not be cited frequently, but their citations come from highly prestigious journals; these journals may have a very low IF but a very high weighted PageRank. Analysis of journals according to their ISI IF and their weighted PageRank shows significant overlaps as well as differences.
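The following Python sketch shows the core idea behind weighted PageRank on a toy journal citation graph. The journal names, citation counts and damping factor are illustrative assumptions, not data from Google or Thomson Reuters.

```python
# Toy citation graph: each journal passes its "prestige" to the journals it
# cites, weighted by how often it cites them (invented data).
# cites[a][b] = number of citations from journal a to journal b.
cites = {
    "A": {"B": 3, "C": 1},
    "B": {"C": 2},
    "C": {"A": 1, "B": 1},
}

journals = list(cites)
damping = 0.85                                  # standard PageRank damping factor
rank = {j: 1.0 / len(journals) for j in journals}

for _ in range(50):                             # power iteration until (near) convergence
    new_rank = {j: (1 - damping) / len(journals) for j in journals}
    for src, targets in cites.items():
        total = sum(targets.values())
        for dst, weight in targets.items():
            # a citation from a high-rank journal carries more prestige
            new_rank[dst] += damping * rank[src] * weight / total
    rank = new_rank

for j, r in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"journal {j}: weighted PageRank {r:.3f}")
```

A journal cited many times only by low-rank journals ends up with a modest score, while one cited by high-rank journals is lifted, which is exactly the popularity-versus-prestige distinction described above.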

h-index and g-index

The h-index was introduced by Hirsch (2005) and is defined as follows: "A scientist has index h if h of his/her Np papers have at least h citations each, and the other (Np-h) papers have no more than h citations each." As a result, the h-index combines both quantity (number of papers) and quality (impact, or citations to these papers) (Glänzel, 2006). The h-index is therefore preferable to simply measuring the total number of citations, as it corrects for "one-hit wonders", i.e. academics who might have authored (or even be the 20th co-author of) one or a limited number of highly cited papers, but who have not sustained that performance over a longer period of time. The h-index is also preferable to a simple count of the number of papers published, as it corrects for papers that are not cited and hence can be argued to have had limited impact on the field. In sum, the h-index favours academics who publish a continuous stream of papers with lasting and above-average impact (Bornmann & Daniel, 2007). The Hirsch index thus measures the quality, sustainability and diversity of scientific output, and so addresses a problem with the SCI, where a single methodological paper could fetch the highest impact. A major limitation is that scientists who are very productive may still have a relatively low h-number.

A disadvantage of the h-index is that it ignores the number of citations to each individual article over and above what is needed to achieve a certain h-index. An academic or journal with an h-index of 6 could therefore theoretically have a total of 36 citations (6 for each paper), but could also have more than 5,000 citations (5 papers with 1,000 citations each and one paper with 6 citations). Of course, in reality these extremes are very unlikely. However, it is true that once a paper belongs to the top h papers, its subsequent citations no longer "count" (Braun, 2005).

Hence, in order to give more weight to highly cited articles, Leo Egghe (2006) proposed the g-index. The g-index is defined as follows: given a set of articles ranked in decreasing order of the number of citations that they received, the g-index is the (unique) largest number such that the top g articles together received at least g² citations. Although the g-index has not yet attracted much attention or empirical verification, it would seem to be a very useful complement to the h-index.
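The two definitions above translate directly into code. The minimal Python functions below compute the h-index and g-index from a list of citation counts; the citation counts in the usage example are invented.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def g_index(citations):
    """Largest g such that the top g papers together have at least g^2 citations."""
    counts = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(counts, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

# Invented example: one highly cited paper raises the g-index more than the h-index.
papers = [100, 9, 7, 5, 5, 2, 1, 0]
print(h_index(papers))  # 5  (five papers have at least 5 citations each)
print(g_index(papers))  # 8  (the top 8 papers have 129 >= 64 citations)
```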

The h-index and g-index have several important advantages over the Thomson ISI JIF. First of all, these indices do not have an artificially fixed time horizon. Second, the h-index, and to a lesser extent the g-index, attenuates the impact of one highly cited article because, unlike citations-per-paper measures such as the JIF, the h-index and g-index are not based on mean scores. The h-index measures the overall citation impact of a journal, not the citation impact of one or two highly cited individual papers in that journal; the h-index for journals therefore provides a robust measure of the sustained and durable performance of journals, rather than of individual articles. Third, both the h-index and g-index are influenced to some extent by the number of papers that a journal publishes: a journal that publishes a larger number of papers has a higher likelihood of generating a higher h-index and g-index, since every article presents another chance for citations (Saad, 2006).

The Y-factor

The Y-factor is a simple combination of the IF and the weighted PageRank. Significantly, the authors claim that the resulting journal rankings correspond well to a general understanding of journal status. For example, while the IF ranking lists five review journals, the Y-factor column has none. Two primary research journals, Cell and the Proceedings of the National Academy of Sciences USA, which are rated highly by peers, figure in the Y-factor list (Satyanarayana, 2010).
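The Y-factor is usually described as the product of a journal's IF and its weighted PageRank; the one-line sketch below assumes that formulation, with invented figures.

```python
def y_factor(impact_factor: float, weighted_pagerank: float) -> float:
    # Assumed combination: the product of the journal's IF and its weighted PageRank.
    return impact_factor * weighted_pagerank

# Invented figures: a review journal with a high IF but a low prestige-weighted rank,
# versus a primary research journal cited from more prestigious venues.
print(y_factor(30.0, 0.0002))  # 0.006
print(y_factor(10.0, 0.0015))  # 0.015 -> ranked higher by the Y-factor
```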

Faculty of 1000

Peer ranking of research papers outside the citation-counting game has also been tried, a prominent example being the Faculty of 1000, a subscription-based literature-awareness tool. Faculty of 1000 comprehensively and systematically highlights and reviews the most interesting papers published in disciplines such as biology and medicine, based on the recommendations of thousands of carefully chosen researchers (http://f1000biology.com/about/faq). These Faculty members evaluate papers on their perceived merit rather than on where they appear, in order to evolve a consensus. Its limitations include the mode of selection of the Faculty itself, as well as the choice of papers considered to be of high quality, since the sample covered is only about 1,000. The final F1000 Factor is consensual, incorporating the ratings a paper receives and the number of times it is selected by different Faculty members. Outstanding work thus gets its deserved peer recognition, irrespective and independent of citation counts (Meho, 2009).

Eigenfactor

Developed by Carl Bergstrom, the Eigenfactor (Bergstrom et al., 2008) provides an online suite of tools that “ranks journals much as Google ranks websites”. The data are taken from the Thomson Reuters databases. Available at no charge, the Eigenfactor is considered a measure of the journal’s total importance to the scientific community. The “Article Influence” metric within the Eigenfactor is comparable to the impact factor, but that is just one aspect of the broader framework.

Other initiatives

Other current initiatives include the MESUR (MEtrics from Scholarly Usage of Resources) project, supported by the Andrew W. Mellon Foundation, a two-year effort to enrich "the toolkit used for the assessment of the impact of scholarly communication items, and hence of scholars, with metrics that derive from usage data" (Banks et al., 2008). MESUR is considered the most comprehensive effort to date to study article impact evaluation techniques vis-a-vis modern scholarly communication practices, which have undergone a sea change over the last decade.

Conclusion

While the impact factor may, in certain circumstances, be a useful subjective tool for grading journal quality, it is not appropriate for the quality assessment of individual articles or authors. The impact factor is a tool whose usefulness is waning, but there is not yet a fully viable alternative to it. Thus, when using the impact factor to compare journals, caution should be exercised, with due regard to its inherent limitations.

