Two articles related to academic literacy development were chosen as the subjects for analysis, as academic literacy development is a core research interest of mine, due largely to the poor state of affairs of education in South Africa. The aim of this evaluative study was to highlight the strengths and weaknesses of both articles in an attempt to gain a better understanding of good research in the field of academic literacy development. Both articles were chosen for their relevance to my field of interest: article one was chosen because it was written by the founder of the 'Reading to Learn: Learning to Read' pedagogy, while article two was chosen because it conducted research similar to mine within the same university context. A structured 'article literacy checklist' was used as a starting point for the critical evaluation; however, due to severe word limitations, not all of the checkpoints are discussed in this assignment. It was found that the two articles differed in their strengths and weaknesses. For example, article one was strong in its clear explanation of the methodology, results and findings, whereas article two was strong in its literature review. In addition, article two showed some failings in its approach to the sampling procedure.
Both article one and article two make use of a descriptive, process-oriented title. Each describes what the article will be about and identifies clearly that an explanation of the process involved will be included. However, article one goes one step further by establishing that the article is not just a description but also an evaluation, thereby providing extra information to inform the audience about its relevance to their area of interest. By contrast, article two seems to leave out this information, perhaps because the authors' focus was more on the actual programme than on the evaluation of its efficacy. Nevertheless, since an evaluation was conducted, a mention of this could have been included in the title to better inform readers looking for both a description and an evaluation of the process of scaffolded approaches to reading and writing.
Introductions, or opening statements, to any research article serve to acquaint the reader with the context and nature of the problem to be investigated (Darley, Zanna & Roediger, 2003). This is achieved noticeably well in both articles, as the authors provide a detailed understanding of the context of their research and the crisis in which their participants find themselves. Furthermore, the need for such a study is highlighted. However, only article one provides insight into the more complex descriptions of the actual research methodology and results. Moreover, unlike article two, article one not only clearly defines what is to be realised within the article but also skilfully leads the reader from 'familiar' terminology to the more 'unfamiliar' technical language (Darley et al., 2003). Seeing as most research is problem-driven, both articles' introductions expertly highlight a crisis in education (a lack of explicit teaching of reading and writing) and both point to a similar lacuna (an absence or gap in pedagogic approaches for correcting this crisis).
Both articles have an abstract, and both abstracts are easily decoded. However, article one provides a more in-depth abstract than article two, as article two leaves out any mention of the evaluative aspect of the paper. In addition, article two does not provide any keywords, a vital component for retrieving information electronically.
Both articles clearly state that the research undertook to explore a new or different approach to teaching academic literacy development. In addition, the goal of both articles was to explain a situation found in a certain context (poorer students' literacy development) and to test the efficacy of the pedagogic approach adopted. However, this was overtly stated in article one and not in article two.
An important rationale for a literature review is the need to formulate a proposal for the research you intend to undertake and to convince your reader that your research is important (Hart, 1998). Moreover, according to Hart (1998), the appraisal of literature provides strong practical validation for your research and demonstrates an understanding of the topic at hand. This is very clearly demonstrated in article two, which combines a thorough investigation into the theory of scaffolding learners' reading and writing with an application of that theory to the South African context (Vygotsky's view of learning as a social process, Cummins' BICS and CALP, and so forth). The sourcing of other research in article two's literature review also lends academic weight to the study. In contrast, article one still provides a literature review in its 'context and purpose of the research' section, but from a slightly different point of view. Seeing as Dr David Rose is one of the founding authors and researchers of the 'Reading to Learn: Learning to Read' methodology, it is no surprise that there appears to be less citing of other previous studies in literacy development. Instead, his literature review seems to stem from his twenty-five years of personal experience in researching the poor literacy levels of non-native speakers.
Sampling and Research Design
Sampling can be a rather contentious issue, as many researchers disagree about the process of selecting sample units from the broader population, and this may create equally controversial debates as to whether their findings can be generalised or seen as accurate (Trochim, 2006). This is evident in article two, which used a non-random, accidental (convenience) sampling procedure. According to Bouma and Ling (2004), this involves a study of a population that is immediately available. The authors of article two used the entire cohort of Science Access students at the University of KwaZulu-Natal (UKZN). An advantage of this sampling procedure is its simplicity (it required little effort) and an alleviation of issues related to statistical reliability (Field, 2009); after all, the greater the sample size, the smaller the standard error in your findings. However, Bouma and Ling (2004) clearly state that accidental sampling may not provide a clear representation of the larger population from which you are trying to extract valuable information. For example, research in the field of literacy development in South Africa needs to impact upon, and aid, all disadvantaged learners from immensely differing contexts. Students from impoverished backgrounds in the different provinces of South Africa face different barriers to learning; therefore, if the researchers in article two want to take their findings and apply them to all universities in South Africa, random sampling may be required, as non-random sampling, according to Bouma and Ling (2004), provides only a weak basis for generalisation.
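The claim that a larger sample shrinks the standard error follows directly from the formula SE = s / √n. A minimal sketch of the calculation (the standard deviation of 15 marks is a hypothetical figure, not taken from either article):

```python
import math

def standard_error(sd: float, n: int) -> float:
    """Standard error of the mean: sd / sqrt(n)."""
    return sd / math.sqrt(n)

# With a hypothetical spread of 15 marks, quadrupling the
# sample size halves the standard error of the mean.
print(standard_error(15, 25))   # 3.0
print(standard_error(15, 100))  # 1.5
```

Because the sample size enters under a square root, the error shrinks slowly: halving the uncertainty requires four times as many students.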
However, if the intention of the authors was to investigate a pedagogic approach to literacy development within the context of their local university as a basis for further studies within the broader South African context, then this sampling procedure would be able to provide adequate data as it used the entire cohort of Science Access students at UKZN.
Article one made use of an action research design that investigated and evaluated a change in pedagogy. Rose (2008) mentions that the researchers were the agents introducing the changes in pedagogy. This type of research was well attuned to the objectives of the study, as it allowed the researchers to engage in reflective and reflexive practices (Pring, 2006). Furthermore, seeing as the researchers were concerned with improving an already failing standard form of academic literacy pedagogy, the action research design was well suited to the goal of the research, which was to investigate an improved educational practice (Pring, 2006). Just as in article two, the entire cohort of students was given the option to partake in the study, but only 25 opted to be part of the research. Once again, a non-random, accidental sampling process was used; in contrast to article two, however, article one clearly stated that the research findings were limited to one context and, for the time being, to one university.
Results (Qualitative and Quantitative), Discussions and Conclusions
It is assumed (there is no explicit mention) that article two made use of both qualitative and quantitative data, as the tests required written work which then appears to have been codified. However, no mention of this is given in the results/findings, nor is there any mention of how the codification was done. Article two presents the discussion of its findings in the form of an evaluation of the success of the science communication module. The authors are honest and forthright in their statement regarding the difficulties they faced in measuring the course's success, due to the complexities of literacy development, which must be commended. However, their actual findings are vague, as a mention of an increase in performance by students in both the written and comprehension parts of the testing seems to contradict a later statement in the same sentence (p. 458). There could be a discrepancy in the interpretation of 'students' (all students tested) and 'most students' (not all students tested). Furthermore, four key problems were prominent within the findings.
Firstly, students are categorised into three different groups (weakest, middle and strongest), yet no mention or explanation is given as to how or why these groups were formed. Secondly, table three provides a comparison of improvement, but no indication is given as to whether these are still the mean scores (continued from table two) or in fact the median scores. This could be crucial should the distribution of the scores not be normal: if the data were in fact skewed, then the median scores would provide a better measure of central tendency. Thirdly, questions arise with regard to the validity and reliability of the tools of measurement. Field (2009) states that validity and reliability are properties of measurement that help ensure measurement error is kept to a minimum. In this particular article, issues of criterion validity undermine the authority of the findings, because the tests implemented may not actually test the reading and writing skills of learners. Learners were being taught to read and write large pieces of scientific writing (essays and reports), but the test implemented measured improvement of these skills through MCQs (acceptable for testing comprehension) and short written tasks. These writing tasks required no more than 7-10 lines of written work, which did not test the genre conventions acquired or the ability to write longer texts. At the same time, the entrance tests were pitched at a pre-university level. After one year of explicit scaffolding, students were given a similar test, still pitched at the pre-university level, which raises the question of whether a 'learned effect' influenced improvement rather than the intervention alone (Arrow, 1962). This brings to light issues of test-retest reliability (Field, 2009). Lastly, no actual test statistics are given in the findings to tell us whether the percentage improvements (14%, 11% and 5%) are statistically significant in themselves; they are therefore purely descriptive in their presentation.
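The mean-versus-median concern can be illustrated with invented numbers (not data from either article): in a positively skewed set of scores, a few strong performers pull the mean well above the typical result, while the median is unaffected.

```python
import statistics

# Hypothetical, right-skewed test scores: two strong students
# inflate the mean relative to the typical performance.
scores = [40, 42, 45, 47, 50, 52, 55, 90, 95]

mean = statistics.mean(scores)      # ~57.3
median = statistics.median(scores)  # 50
print(f"mean={mean:.1f}, median={median}")
```

A reader comparing cohorts on the mean alone would overstate typical improvement here by around seven marks, which is why the ambiguity in table three matters.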
Article one is much clearer in terms of its research findings and results, as it skilfully explains the measurement tool used to assess the writing tasks. Unlike article two, there do not appear to be issues related to criterion validity, as the tests set out to measure the efficacy of the pedagogic approach assess longer pieces of writing to test writing skills. The tests are further authenticated by the use of both qualitative and quantitative feedback, and the results of both are very clearly laid out for the reader. Furthermore, the coding of the qualitative data was backed up by tried-and-tested methods used by the University of Sydney and by research in the field of linguistics. This allowed the rates of literacy improvement to be objectively measured. Article one also ranked its research population into three separate groups, but its description of how and why this was done was explicit and allowed for greater understanding than article two. Article one provided a much clearer description and discussion of its findings and offered possible motivating factors for areas that did not correspond to the overall trend of progression. This, together with neatly laid out tables and graphs of the data, allows for higher levels of confidence in the objectivity of the action research. Furthermore, the findings of the action research were linked to suggestions for how to improve literacy development amongst disadvantaged learners across Australia.
To conclude, both article one and article two provide good examples of research in the field of literacy development. In addition, they provide good models of how to, and how not to, report on such findings. Each may have differing strengths and weaknesses, but both still provide a good example of how to conduct valid and objective research. The analysis of the two articles has provided a good foundation for my own research and an opportunity for me to alter my current research design to provide more valid and reliable results.