The authors of this research paper compare approaches to technology-assisted learning and extend prior studies with a longitudinal field experiment, in an attempt to obtain less equivocal results. In addition, the authors define 'technology-assisted learning' as a medium to support and enhance learning rather than a replacement for more conventional methods. Their investigation of whether a technology-assisted learning platform can deliver successful learning outcomes is based upon a learning system (website) that supports university students in Hong Kong in learning the English language.
In recent years technology-assisted learning has become an essential component of commercial training and academia. Institutions have implemented technology-assisted learning platforms in an effort to enhance learning and teaching (Kwan et al. p.1). Although the area continues to grow, there are still concerns over how effective such a platform can be, as well as over its role in student learning outcomes (Allen, 2006). Despite the benefits of accessing content from any location, research into the outcomes of technology-assisted learning, such as Ladyshewsky (2004), has led to ambiguous results. Platforms have been launched by institutions without conclusive data to suggest whether technology-assisted learning is an effective means of learning.
In an attempt to obtain more conclusive data to support technology-assisted learning, Jen-Hwa Hu et al. (2007) extend studies conducted by others, such as Larkin and Budny (2005), to support their hypotheses. The authors also make use of Kolb et al.'s (1990) Learning Style Model and carry out a longitudinal field experiment on an interactive online English website.
Jen-Hwa Hu et al. (2007) adopt several data collection techniques and research methods to support their hypotheses (of which there are seven), including the study design, dependent variables and measurements, control and treatment groups, and data collection.
Quantitative approaches include setting the subjects (students) an English language test assessed on a Likert scale. They also set the subjects an individual assessment based on strict scripting and questioning. The authors collected the data and took the average of each individual's test scores to approximate objective learning effectiveness. They assessed the results and used the data to support the idea that technology-assisted learning is more successful than face-to-face methods of learning.
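As a minimal sketch of the averaging step described above (the scores here are hypothetical, since the paper's raw data are not reproduced in this review), the authors' proxy for objective learning effectiveness amounts to a simple mean of each subject's test scores:

```python
from statistics import mean

# Hypothetical test scores for one subject across repeated assessments.
# "Objective learning effectiveness" is approximated as the average
# of the subject's individual test scores.
scores = [72, 78, 85]
effectiveness = mean(scores)
```

Averages computed per subject in this way can then be compared between the control and treatment groups.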
Despite these quantitative elements, the authors make significant use of subjects to support their hypotheses, and it is clear that the paper relies mostly on qualitative methodologies. The authors utilise the learning system already in place and randomly assign the students to testing groups based on their timetabled lessons. They then give each group a programme of study.
The control group was taught using the face-to-face method, whereas the treatment group had access to online material to support their face-to-face sessions. The authors claim that randomly assigning students to groups based upon nothing more than their allocated lesson times should ensure the results are not biased toward the expected outcomes. However, Jen-Hwa Hu et al.'s (2007) research can be scrutinised on the basis of how these groups were formed. Even though they created two random groups of students, and later in the paper revealed that one group (the face-to-face group) leaned toward abstract and reflective learning compared with the technology-assisted learning group, the authors did not assess the students' levels of autonomy, intelligence, or indeed their existing understanding of the English language.
The English language course, as noted by the authors, is a compulsory component of any degree taken at the Hong Kong university. Failing to factor in this element when placing students into groups could have led to biased results. Although the authors clearly intended to keep these groups as random as possible, this particular variable could have had a significant impact on the measured learning effectiveness. Evidently this significant variable was not quantified.
Additionally, upon discovering that the groups were biased in their formation (i.e. one group favoured a particular learning style, which is key to assessing learning effectiveness and student outcomes), the authors did not change the groups or repeat the experiment, which could have led to results weighted in favour of their predetermined outcomes.
As noted within the paper, the authors' motivation is to explore and extend previous studies in technology-assisted learning and to assess why earlier results led to ambiguity. Furthermore, their use of a longitudinal field experiment, together with an exploration and evaluation of learning assessment measures, is aimed at producing results that are less ad hoc in evaluation than those of prior studies. The authors are motivated to introduce a more theoretical approach to the study, making use of qualitative research rather than relying solely on statistical data and models (such as Kolb's Learning Style Model) as the basis for research. By providing more concise and usable data, future technology-assisted learning platforms could prove more effective.
The authors formulate seven hypotheses which, in summary, propose that technology-assisted learning leads to greater learner satisfaction, greater course learnability, comparable learner satisfaction (with face-to-face learning) and a feeling of less community support. The authors propose bridging the gap left by previous studies by assessing outcomes and learning effectiveness in the context of language learning, using subjects from a university rather than predetermined models (Jen-Hwa Hu et al. p. 11). Making use of real-life subjects in a natural environment, rather than a controlled experiment created purely for the purposes of research, improves the validity of the results obtained and should, theoretically, help explain the ambiguities found in previous results.
1.5 Author Evaluation
Upon evaluating their solution the authors acknowledge that, of the seven hypotheses they formulated, three lacked supportive evidence. Furthermore, the authors found that their initial approach of using Kolb's Learning Style Model led to discriminatory results, and they therefore used Cronbach's alpha to assess and improve the reliability of the subject study's measures.
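Cronbach's alpha is a standard internal-consistency statistic for multi-item scales such as Likert questionnaires. A minimal sketch of its computation, using hypothetical item responses rather than the paper's actual data, is:

```python
from statistics import pvariance


def cronbach_alpha(items):
    """Cronbach's alpha for k questionnaire items.

    `items` is a list of k lists; each inner list holds every
    subject's score on one item.
    """
    k = len(items)
    # Sum of the variances of the individual items across subjects.
    item_var_sum = sum(pvariance(item) for item in items)
    # Variance of each subject's total score across all items.
    totals = [sum(scores) for scores in zip(*items)]
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)


# Hypothetical Likert-style responses (1-5) from five subjects on three items.
items = [
    [4, 5, 3, 4, 5],
    [4, 4, 3, 5, 5],
    [3, 5, 2, 4, 4],
]
alpha = cronbach_alpha(items)
```

Values of alpha closer to 1 indicate that the items measure the same underlying construct consistently; a common rule of thumb treats alpha above roughly 0.7 as acceptable reliability.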
The solution of applying a theoretical and qualitative approach to an area which has largely been assessed on a quantifiable, technical basis is rather successful. The authors assessed effectiveness for students and chose a study group which would be motivated to take advantage of such a platform. Conversely, using subjects studying a higher education course could introduce bias: one would assume that their motivation to learn is high, and therefore that they would perform well through both face-to-face and technology-assisted learning methods.
Although the ideas of the paper are strong and its hypotheses predominantly supported, one cannot treat this research as an 'ideal solution'. As noted, the study group lacked randomness, and the authors changed their instrument of measurement halfway through the experiment to try to obtain more reliable results. Their solution was satisficing, and their evaluation did not fully 'explain away' the ambiguities of prior research in the area.
The paper made modest contributions which could form the basis of more extensive research. Firstly, the authors discovered that some learning styles respond more positively to technology-assisted learning than others. The authors also achieved greater reliability through their longitudinal experiment, which offered some insight into the ambiguity of previous studies.
1.7 Clarity of Research
Despite the authors taking a more qualitative approach to an area usually evaluated on its technical aspects, the paper did have unclear elements. Firstly, the authors attempted to answer seven hypotheses through one experiment and a limited methodology. One would expect one or two hypotheses to be assessed, with further research focusing on the remainder (particularly those that were eventually unsupported by their research). The authors appeared to want to make a significant breakthrough in the area of technology-assisted learning but provided only some evidence which could be used to support further research. The experiment itself appeared limited, and the results are relevant only to a specific audience learning the English language. Therefore, other researchers and experts within the field could only use this research in a limited way.