

As universities extend their blended learning offerings to reach more time- and place-bound students, the degree to which blended students succeed, compared with their classroom counterparts, is of interest to accreditation review boards and others charged with assessment. Instructors use information about the effectiveness of their instruction to evaluate and improve the learning experience. The purpose of this study was therefore to evaluate student performance in, and satisfaction with, a blended learning approach to delivering a computer science course on multimedia applications, compared with delivering the same course content as traditional classroom lectures. Eighty-seven undergraduate students were randomly assigned to one of two teaching method groups: Classroom Lecture Instruction (CLI) and Blended Lecture Instruction (BLI). Each group received thirteen 95-min periods of instruction divided into four sections: (a) a 5-min outline of the key learning points, (b) a 40-min lecture on general knowledge, (c) 45 min of constructivist-inspired learning activities, and (d) a 5-min summary of the key learning points. At the beginning and end of the study, students completed a 31-item multiple-choice knowledge examination. Additional measures of course achievement included each student's scores on three class exams and the overall course grade. Finally, participants in both groups completed a satisfaction survey at the end of the course, after completing the post-test examination. A two-way analysis of variance (ANOVA), with repeated measures on the last factor, was conducted to determine the effect of method group (CLI, BLI) and measurement time (pre-test, post-test) on student performance. The Time main effect was significant, and two paired-samples t tests were conducted to follow it up.
Post-test knowledge scores were significantly greater than pre-test knowledge scores for both groups. Furthermore, independent-samples t-test analyses were conducted to compare students' satisfaction with the CLI and BLI methods. Results indicated that the blended course delivery was preferred over the traditional lecture format. In addition, achievement of the learning objectives, as measured by the final grade in the course, depended on the mode of instruction.

Key words: instructional technology, multimedia applications, blended instruction, traditional instruction, cognitive learning, satisfaction.


Distance education has become commonplace in today's tertiary education scene. One has only to look in course-listing books published by colleges and universities of all sizes to see that one or more technology-mediated courses are part of the curriculum offerings. Technology-mediated instruction includes a wide variety of instructional delivery methods including, but not limited to, teleconferencing, video teleconferencing, web-based courses, and distance courses. Teleconferencing and video teleconferencing are generally synchronous (occurring in real-time with instant communication) while web-based and distance courses can be synchronous, asynchronous (communication is delayed for example by email, blogs, or chat boards), or a combination of synchronous and asynchronous communication.

At one extreme are those institutions of tertiary education that are totally online, offering programs and degrees exclusively via the Internet (e.g., the University of Phoenix). At the other extreme are those institutions that remain totally traditional in their educational approach. Most universities, however, fall somewhere on the continuum between the two extremes, maintaining a traditional view of education while incorporating online/distance courses into their existing programs.

The current trend to complement face-to-face classes with web-based materials is known as "blended learning" (Tabor, 2007). This style of learning is normally defined as the integration of traditional classroom methods with online activities (Tabor, 2007; Macdonald, 2008).

Blended instruction differs from traditional instruction in that it employs a web-based curriculum and shifts the emphasis from a teacher-centered to a learner-centered philosophy (Harker & Koutsantoni, 2005; Schober, 2006). Further, blended instruction differs from distance instruction in that learners are required to meet as a group in a centralized location, such as a classroom or lab, with an instructor for a specified period of time. To help clarify the nuances among the various types of online courses, Allen and Seaman (2008) provide the following definitions:

Table 1. Various types of online courses.

Proportion of Content Delivered Online | Type of Course | Typical Description
0% | Traditional | Course with no online technology used; content is delivered in writing or orally.
1-29% | Web Facilitated | Course that uses web-based technology to facilitate what is essentially a face-to-face course; for example, uses a course management system (CMS) or web pages to post the syllabus or assignments (e.g., WebCT).
30-79% | Blended/Hybrid | Course that blends online and face-to-face delivery; a substantial proportion of the content is delivered online, typically using online discussions, with some face-to-face meetings.
80%+ | Online | A course where most or all of the content is delivered online, typically with no face-to-face meetings.

While a number of studies compare traditional education to distance education (Fortune, Shifflett & Sibley, 2006; Mansour & Mupinga, 2007; Olapiriyakul & Scher, 2006), there is little research comparing blended instruction to either distance or traditional education. The few studies that have been conducted suggest that blended instruction is more effective than either traditional or distance education in at least one facet of the studied program. For example, in the study by Vernadakis, Antoniou, Giannousi, Zetou and Kioumourtzoglou (2011), blended instruction proved more successful than traditional instruction in increasing student academic performance in a new-technologies-in-Physical-Education course. The study by Schober, Wagner, Reiman, Atria and Spiel (2006) suggested that the blended instructional model is more effective than traditional instruction at generating or increasing student interest in, and motivation toward, course content in a credit-bearing research methods course for graduate students. Harker and Koutsantoni's (2005) study pointed out that the biggest benefit of blended instruction over distance instruction in a non-credit-bearing English for Academic Purposes course was an increased rate of student retention. Finally, El-Deghaidy and Nouby's (2008) study indicated that pre-service teachers in the blended lecture group had higher achievement levels on their post-overall-course test ("comprehensive score") and more positive attitudes towards e-learning environments than those in the traditional lecture group.

Despite findings in the current literature supporting the notion that blended instruction is more effective than either traditional or distance education, there are in actuality very few studies available to confirm or refute such conclusions, especially in the Physical Education area (Vernadakis, Antoniou, Giannousi, Zetou & Kioumourtzoglou, 2011). Moreover, the existing studies focus on different variables (grade point average, student retention, student perception, student interest/motivation). The research proposed here will contribute to the overall literature on distance education and alternative forms of instructional delivery; more specifically, it will add to the academic literature comparing blended instruction to traditional instruction in Physical Education. More studies are needed assessing the effectiveness of blended instruction in general, and specifically its effectiveness in credit-bearing courses. This study contributes to that needed body of literature.

Although most research in distance education has examined the effectiveness of blended courses in the light of course grades and test scores, some researchers have contended that examining grades alone is not sufficient to estimate the effectiveness of a course, since other factors such as student satisfaction may influence student achievement (Abdous & Yoshimura, 2010). Student satisfaction is considered an important indicator of the effectiveness of a course (Bolliger & Wasilik, 2009), and Paechter, Maier and Macher (2010) stressed the need to investigate the student satisfaction criterion in order to fully understand the online learning environment.

Moreover, e-learning technology developed around the blended paradigm is beneficial for improving the quality of learning, but is useless if it is not based on pedagogical prescriptions (Alonso, López, Manrique & Viñes, 2005; Papastergiou, 2007). Pedagogical principles are theories that govern good educational practice. Both Thurmond (2002) and Oliver (2001) stated that the use of learning theories can contribute to the quality of blended courses by providing a framework for the development and implementation of appropriate teaching-learning activities. Woo and Reeves (2007) identified three main learning theories: behaviorism, cognitivism, and constructivism. Behaviorist learning theory focuses on observable behavior (objectivity), while cognitivism focuses on unobservable mental processes (subjectivity). Constructivism emphasizes the construction of new knowledge by the learner and a focus on active, learner-centered experiences (Young and Maxwell, 2007). At present, the educational environment is shifting from teacher-centered to student-centered, and constructivism is a learning theory that could prove useful for designing and developing a blended learning program based on active, learner-centered experiences (Low, 2007).

Therefore, the purpose of this study was to evaluate student performance in, and satisfaction with, a blended learning approach to delivering a computer science course on multimedia applications, compared with delivering the same course content as traditional classroom lectures. Constructivist design was applied in both approaches to help students develop constructive learning habits. In the blended learning approach, 67% of the content and activities were delivered through online computer-mediated communication and 33% through face-to-face classroom interaction.



The participants in this study were eighty-seven (N = 87) third-year undergraduate students from the Department of Physical Education & Sport Sciences at the Democritus University of Thrace taking an elective course titled "Information and Communication Applications - Multimedia Systems" in two successive years. Four classes were selected from the two successive years for this quasi-experiment. These classes were taught by the same instructor according to the designed teaching plan throughout the entire course. Participants were randomly assigned to one of two teaching methods, CLI (24 males and 21 females) and BLI (22 males and 20 females), creating two independent groups of 45 (51.7%) and 42 (48.3%) students respectively. Prior to group assignment, participants were oriented to the purpose of the study, the experimental group to which they belonged, the method by which they would be taught, and the obligations of participation in the experiment. All students in the classes were asked to participate, though procedures differed between the two course delivery formats. Each student was asked to give consent to participate in the study and was informed that participation was voluntary.

The Course

The course under study was a semester-long, 2 credit-hour class, targeted at third-year undergraduate students in the Department of Physical Education & Sport Sciences. Its purpose was to introduce students to the fundamentals of multimedia design. The course provided students with the fundamental skills and knowledge to define a problem and design a multimedia application to solve it, to understand and recognize the characteristics of good multimedia design, to begin to use and apply popular multimedia development tools, and to work as part of a team to produce a workable multimedia solution.

Specifically, students in both environments (CLI, BLI) were required to build a prototype of their multimedia application in the initial stage of the course. In particular, each student was asked to assume the role of a Physical Education teacher working in a secondary school and to prepare a video presentation introducing his or her pupils to a physical activity and life quality topic chosen by the student. In the first 45 minutes of each class, the teacher lectured on guidelines for, or common mistakes and bugs in, the video presentation frames. The students then had 50 minutes to discuss with their team members how to implement what they had learned. When classes were delivered online, students could synchronously discuss and collaborate on the construction of their video presentations through an online messenger and chat room, and could asynchronously interact with team members in their teams' dedicated forums. When classes were delivered in the classroom, students discussed and assigned their tasks in that physical learning environment. Students had to reconsider and modify the prototypes of their video presentations according to the new knowledge they had just acquired.

In this experiment, the instructor initiated students in both CLI and BLI into the field of multimedia application development, planning and creation. He first established the students' essential knowledge and developed the required skills in the initial stage of the course. After students had climbed the steep learning curve and encountered bottlenecks, they were required to gather information and solve problems by themselves.

Course Management System

The Open eClass platform, version 2.1, was used to provide an alternative to the traditional method of distributing information. This platform allowed teachers to quickly organize practical online courses, contact registered student users, upload educational materials (texts, images, presentations, video, assignments, exercises, etc.), and create discussion forums where course participants could interact. Students, for their part, could access the educational materials via the Internet and participate in working groups, discussion forums and exercises (GUnet Asynchronous eLearning Group, 2010).

Users logged in to the Open eClass platform by entering their username and password, which gave them access to their personal portfolio, an area that helped them organize and track their eCourse participation on the platform.

On the eCourse home screen there was a short description presenting basic information (title, code, responsible teacher, department, etc.). There was also an "email" hyperlink that allowed registered student users who had defined an email address in their profile to communicate with the course teacher via email. On the left was a menu of all the active eLearning tools (modules) the responsible teacher had enabled for the eCourse (see Figure 1).

Figure 1. The asynchronous e-learning platform open eClass.

Upon completion of the eCourse, students could sign out of the Open eClass platform by clicking "Logout" at the top right of the screen.


Knowledge Test. A knowledge test was developed to determine students' achievement in cognitive learning of interactive multimedia systems. A table of specifications was developed to reflect the interrelationship between the identified course content and the levels of learning. Based on these specifications, a 34-item multiple-choice test was constructed. Each test item had four options in order to reduce the probability of guessing. Test construction was based on the linear model, which required that test scores be obtained by summing the number of correct answers with equal weighting over items. The questions were written based on the book "Information Society and the Role of Interactive Multimedia" (Deliyannis, 2006).

After the questions were constructed as explained above, a panel of experts in multimedia systems teaching was used to evaluate and judge the content validity of the test instrument. This group reviewed the test items and established whether each item measured the target skill. Every time a set of changes was made, the questionnaire was reviewed again by the consultants, until the instrument was deemed adequate.

The revised version of the knowledge test consisted of 31 multiple-choice items. A pilot study was performed to assess item difficulty and the clarity of the questions (Green & Salkind, 2007). Questions were scored one point (1) for a correct answer and zero points (0) for an incorrect answer.

Satisfaction Scale. One of the best developed and most widely used student feedback questionnaires in the literature is the Student Evaluation of Educational Quality (SEEQ) (Marsh, 1982). The SEEQ is based not on student learning research but on psychometric analysis. A consequence is that, while the constructs underlying the SEEQ are less well supported by learning theory, the psychometric characteristics of the questionnaire are highly developed. Participants in this study completed a 12-item modified version of the SEEQ (Centra, 1993) using a 5-point Likert scale: strongly agree = 5, agree = 4, neutral = 3, disagree = 2, and strongly disagree = 1. The SEEQ has an exceptionally high level of reliability (Cronbach's alpha from 0.88 to 0.97). It also has a reasonable level of validity in that scale scores correlate significantly with a wide range of learning-outcome measures, such as student marks on standardized examinations, student feelings of mastery of course content, plans to apply skills learned in the course, and plans to pursue the subject further (see Table 3).


Pilot study. A pilot study was conducted to determine the reliability and validity of the knowledge scale, to test the research procedures and to make any necessary revisions before full implementation of the study. The participants were 38 undergraduate students enrolled in a blended course at Democritus University of Thrace; this population was chosen to keep the pilot study similar to the main study with respect to participants' age. Participants were given two online 95-minute class periods of instruction and a face-to-face overview of interactive multimedia systems. The knowledge test was administered on the fourth day at the computer lab facility on the university campus. Eighteen Windows-based computer workstations were used to administer the test, each with access to an online answer-selection system for completing and submitting the 34 multiple-choice questions. Participants completed the knowledge test one question at a time: after completing a question, the participant clicked a "next" button to move on, until all questions were completed. The questionnaire was also designed with an embedded program so that, if a participant skipped an item, a JavaScript prompt appeared requiring the participant to complete the missing item before proceeding. After completing the entire questionnaire, the participant clicked a "submit" button, which sent the completed questionnaire to a secure server accessible only by the researchers. It was determined that participants needed approximately 30 minutes to complete all questions of the instrument.

Main study. After the pilot study, a main study was conducted to compare the scores obtained by the 87 undergraduate students on the knowledge test, the three written exams, the final course grade and the satisfaction survey. The knowledge test was administered on the first day to measure participants' knowledge of interactive multimedia systems. Procedures for the knowledge test were the same as in the pilot test, except that three questions had been removed, reducing the number of questions to thirty-one (see Table 2).

On the second day, the computer lab facility was set up according to the needs of the experimental procedure. The facility contained 18 Windows-based multimedia computer workstations with identical infrastructure (hardware, software) and Internet connectivity. Computers were separated as much as possible to create individual workstations. Before the experiment started, the BLI group was given a 95-minute introductory session on how to use the Open eClass platform and its tools. Then the responsible instructor of the course gave a 45-minute lecture to all participants introducing the unit "Information and Communication Applications - Multimedia Systems". Instruction, practice (activities), and testing for this study were held in thirteen separate, successive weeks, with the groups meeting for 95 minutes each week.

The CLI method incorporated a direct style of teaching including lectures, activities, and discussion. Participants attended a typical live lecture that provided ample opportunity for teacher-student interaction (reviewing the lecture material through discussion). During the lecture, PowerPoint slides were used to present textual information, graphics, and a few animations. Immediately after the lecture, students were given computer activities to enhance and enrich teaching and learning in the computer lab. Specifically, the CLI group received thirteen 95-min periods of instruction divided into four sections: (a) a 5-min outline of the key learning points, (b) a 40-min lecture on general knowledge, (c) 45 min of constructivist-inspired learning activities that corresponded with the lecture content, and (d) a 5-min summary of the key learning points. Participants were allowed to work alone or with a partner. Oral instructions (feedback) could be given during the 45 minutes of activity.

Participants in the BLI method combined asynchronous educational activities on the Internet with traditional training activities in the classroom. The blended design followed a one-to-three ratio: five (5) instructional units were delivered with the traditional teaching method in the classroom, while the remaining eight (8) units used the asynchronous course management system Open eClass. The five (5) traditional class meetings served to conclude each instructional unit (an educational goal) and, at the same time, introduced students to the next instructional unit. The BLI group received thirteen 95-min periods of instruction divided into four sections: (a) a 5-min outline of the key learning points, (b) a 40-min e-lecture on general knowledge (a video feed of the lecturer synchronized with PowerPoint slides), (c) 45 min of constructivist-inspired e-learning activities that corresponded with the e-lecture content, and (d) a 5-min summary of the key learning points. A member of the university assistant staff was present for organizational and supervisory purposes only. Participants were allowed to work alone or with a partner.

At the end of the treatment, the knowledge test that had previously served as a pre-test was given to students as a post-test. After completing the post-test knowledge examination, participants in both groups completed the SEEQ scale. Additional measures of course achievement included each student's scores on the three class exams given during the course and the overall course grade. Both groups had the same learning conditions, such as the topics and principles introduced in the treatments, and equal opportunities to achieve the learning outcomes.


This study measured the effectiveness of a blended general multimedia course in the Department of Physical Education and Sport Sciences at Democritus University of Thrace. The research methodology employed a quantitative, pretest-posttest control group design. The use of intact classrooms, in which students are not individually assigned to groups, denotes a quasi-experimental research design. In this study, students' knowledge acquisition was measured along with end-of-course class satisfaction, exam scores and course grades. A pretest-posttest control group design is one of the strongest methodological research designs, helping ensure that significant differences discovered between and among groups can be attributed to the intervention. With this research design, threats to internal and external validity are controlled and generalization to other similar settings is possible (Green & Salkind, 2007).

Specifically, the experiment on the knowledge test was a factorial design with teaching method group (CLI, BLI) and repeated measurement (pre-test, post-test) as independent variables and knowledge learning as the dependent variable. The experiment on satisfaction, class exams and final course grade was a factorial design with teaching method group (CLI, BLI) and the post-test measurement as independent variables, and students' scores on the satisfaction scale, the three class exams and the overall course grade as dependent variables.

The research questions of this study were:

Should one or more items on the knowledge test be deleted or revised to obtain a better measure of interactive multimedia systems knowledge?

Do students, on average, score differently on the knowledge test under the CLI and the BLI teaching approaches?

Do students, on average, score differently on the knowledge test between the pre-test and post-test measurements?

Do the differences in mean knowledge test scores between the CLI and the BLI teaching method groups vary between the pre-test and post-test measurements?

Are students, on average, more satisfied with the CLI or the BLI teaching approach?

Do students, on average, score differently on the class exams under the CLI and the BLI teaching approaches?

Do students, on average, differ in overall final course grade under the CLI and the BLI teaching approaches?


An item analysis using the responses from the pilot study was conducted to determine each item's difficulty rating and index of discrimination. The alpha reliability method was used to determine the internal consistency of the knowledge test. A two-way analysis of variance (ANOVA), with repeated measures on the last factor, was conducted to determine the effect of method group (CLI, BLI) and measurement time (pre-test, post-test) on knowledge acquisition. Independent-samples t-test analyses were conducted to compare students' satisfaction, class exam scores and final course grades between the CLI and BLI methods. Each variable was tested at an alpha level of .05. The results of each analysis are presented separately below.
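The t-test comparisons described above can be sketched with SciPy; the score arrays below are hypothetical stand-ins for the study's data (group sizes 45 and 42, scores on the 0-31 knowledge scale), not the actual results:

```python
# Sketch of the t-test portion of the analysis pipeline on hypothetical data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical pre/post knowledge scores for the two groups (illustrative only).
cli_pre, cli_post = rng.integers(8, 20, 45), rng.integers(12, 28, 45)
bli_pre, bli_post = rng.integers(8, 20, 42), rng.integers(12, 28, 42)

# Baseline equivalence: independent-samples t test on pre-test scores.
t_pre, p_pre = stats.ttest_ind(cli_pre, bli_pre)

# Follow-up of the Time main effect: paired-samples t test within each group.
t_cli, p_cli = stats.ttest_rel(cli_post, cli_pre)
t_bli, p_bli = stats.ttest_rel(bli_post, bli_pre)

print(f"pre-test equivalence: t = {t_pre:.2f}, p = {p_pre:.3f}")
print(f"CLI pre vs post:      t = {t_cli:.2f}, p = {p_cli:.3g}")
print(f"BLI pre vs post:      t = {t_bli:.2f}, p = {p_bli:.3g}")
```

The two-way repeated-measures ANOVA itself would typically be fitted with a dedicated mixed-design routine; the sketch covers only the baseline check and the paired follow-ups reported below.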

Item Analysis of the Knowledge scale

The pilot study knowledge test had a mean difficulty rating of 54%. When all items were analyzed, two questions (5.9% of the items) had unacceptable difficulty rating values; applying a difficulty rating criterion of between 10% and 90% left 94.1% of the items at an acceptable level of difficulty. The test had a mean index of discrimination of .32. One question (2.9% of the items) yielded an unacceptable index of discrimination, the acceptable value being .20 or higher; acceptable values were observed for 97.1% of the items. As shown in Table 2, three items (18, 30 and 34) were therefore deleted from the test for the main study.
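A minimal sketch of these two item statistics, assuming a hypothetical 0/1 response matrix and the classical upper-lower (27%) definition of the discrimination index:

```python
# Item difficulty = proportion of correct answers per item.
# Index of discrimination = proportion correct in the top 27% of scorers
# minus the proportion correct in the bottom 27% (classical upper-lower method).
import numpy as np

rng = np.random.default_rng(1)
responses = rng.integers(0, 2, size=(38, 34))  # 38 pilot students x 34 items (hypothetical)

difficulty = responses.mean(axis=0)            # proportion correct per item

totals = responses.sum(axis=1)                 # each student's total score
k = max(1, int(round(0.27 * responses.shape[0])))
order = np.argsort(totals)
low, high = order[:k], order[-k:]              # bottom and top 27% of scorers
discrimination = responses[high].mean(axis=0) - responses[low].mean(axis=0)

# Flag items outside the criteria used in the study:
# difficulty between .10 and .90, discrimination >= .20.
flagged = np.where((difficulty < .10) | (difficulty > .90) |
                   (discrimination < .20))[0] + 1  # 1-based item numbers
print("flagged items:", flagged)
```

With random illustrative data most items discriminate poorly; on real responses only the few genuinely weak items would be flagged.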

Table 2. Summary of item analysis for the pilot study knowledge test. (The original table reported an index of discrimination and a difficulty rating for each of the following 34 item stems.)

1. Which of the following is a picture layer
2. Which of the following is not a stage in multimedia
3. Which application requires extreme realism including moving images
4. The Bezier is a
5. Which of the following changes the position of the picture
6. Which of the following changes the orientation of the picture
7. Which of the following changes the size of the picture
8. Antialiasing is
9. Aspect ratio is
10. Authoring software is
11. AVI is
12. Frame is
13. DPI is
14. Hypermedia is
15. Adapter cards are
16. Which is the computer graphics software
17. Which of the following devices is used for interaction with a computer model
18. Which of the following techniques is used to brighten the parts of an image closer to the observer
19. JPEG stands for
20. Kiosk is
21. Morphing is
22. MP3 is
23. PNG is
24. Sound editor is
25. Which of the following is used in all image generation
26. Multimedia environments are
27. Capacity of a DVD runs in
28. Features of modeling tools are
29. Which of the following is not a feature of OCR
30. In which form are the drawing area contents printed on a matrix printer
31. Which technique is useful to teach and to entertain
32. Selection feedback is implemented using
33. In a multimedia project, a storyboard is
34. With reference to multimedia elements, pick the "odd one" out

Reliability of the Knowledge and Satisfaction scales

Reliability measures for the knowledge test and the satisfaction survey were assessed. An alpha reliability coefficient of .77 was computed from the inter-item correlations of the pilot study knowledge test, while the Cronbach's alpha coefficient of the satisfaction scale was .92; both values exceeded .70. According to Green and Salkind (2007), the reliability coefficient should be at least .70 for a test to be considered reliable. Thus, the pilot knowledge test and the satisfaction survey were determined to be reliable measurement instruments.
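Cronbach's alpha can be computed directly from an item-score matrix; the data below are purely illustrative (a single latent factor plus noise), not the study's responses:

```python
# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of totals).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: (n_respondents, n_items) matrix of item scores."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
# Correlated items (shared latent trait) yield a non-trivial alpha.
latent = rng.normal(size=(100, 1))
items = latent + rng.normal(scale=0.8, size=(100, 12))  # 12 items, as in the SEEQ
print(f"alpha = {cronbach_alpha(items):.2f}")
```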

Knowledge test Comparison

There were no significant initial differences between the two teaching method groups in mean knowledge test scores, t(85) = .31, p = .83. A significant main effect was noted for Time, F(1, 85) = 34.97, p < .001, while the Time x Group interaction was not significant, F(1, 85) = 6.74, p = .37. The univariate test associated with the Group main effect was also not significant, F(1, 85) = 6.25, p = .34.

Two paired-samples t tests were conducted to follow up the significant Time main effect and assess differences across time within each teaching method group. Mean knowledge test scores in the CLI group differed significantly between pre-test and post-test, t(43) = 4.45, p < .001. Similarly, mean knowledge test scores in the BLI group differed significantly between pre-test and post-test, t(41) = 5.37, p < .001. The magnitude of the effect, as assessed by Cohen's d, was small to medium (d = .32) for CLI and medium (d = .46) for BLI. As shown in Figure 2, post-test knowledge scores were significantly greater than pre-test scores for both groups.
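One common paired-samples variant of Cohen's d (the mean pre-post difference divided by the standard deviation of the differences) can be sketched as follows; the score arrays are hypothetical stand-ins for the real data:

```python
# Paired t test plus a paired-samples Cohen's d on simulated pre/post scores.
import numpy as np
from scipy import stats

def paired_cohens_d(post: np.ndarray, pre: np.ndarray) -> float:
    diff = post - pre
    return diff.mean() / diff.std(ddof=1)

rng = np.random.default_rng(3)
pre = rng.normal(15, 4, size=44)
post = pre + rng.normal(2, 3, size=44)   # simulated learning gain

t, p = stats.ttest_rel(post, pre)
print(f"t = {t:.2f}, p = {p:.3g}, d = {paired_cohens_d(post, pre):.2f}")
```

Other d variants exist (e.g., dividing by the pre-test SD); which one a study uses should be stated alongside the value.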

Figure 2. Groups' performance on all measures of the Knowledge test.

Student Course Satisfaction

To compare student course satisfaction, all participants completed a satisfaction survey at the end of the course, consisting of a modified SEEQ (Centra, 1993). All 12 questions that comprised the SEEQ were rated higher for the blended course design (Table 3). A composite SEEQ score was calculated, and the overall mean was higher for the blended course (44.79) than for the traditional course (39.80).

Table 3. Means and standard deviations for post-test scores of the two groups on satisfaction.

Item | BLI Mean (SD)* | CLI Mean (SD)*
Class size is appropriate | 4.45 (0.63) | 3.93 (0.95)
The class activities were engaging | 4.11 (0.78) | 3.69 (0.98)
The class environment was inviting | 4.10 (0.82) | 3.49 (1.07)
The class was fun | 3.66 (0.90) | 3.02 (1.17)
I was bored in class | 2.72 (1.09) | 3.19 (1.21)
I enjoyed going to class | 3.16 (1.01) | 2.80 (1.17)
I felt comfortable to voice my opinion in class | 3.72 (0.92) | 3.13 (1.09)
I learned from my peers' experiences | 3.42 (1.00) | 2.89 (1.07)
I felt my presence was valued in the class | 3.41 (0.97) | 2.71 (1.11)
I felt comfortable approaching the instructor | 3.97 (0.96) | 3.83 (1.04)
The instructor encouraged class discussion | 4.21 (0.72) | 3.57 (1.08)
I would recommend this class to a friend | 3.86 (0.96) | 3.55 (1.26)
Composite Student Evaluation Score (Q1-Q12) | 44.79 (10.76) | 39.80

* 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree

Significant differences in total SEEQ mean scores are reported in Table 3. Total scores for the blended (44.79) and traditional (39.80) sections differed significantly (p < .001), indicating that blended students judged the quality of their education to be higher than traditional students did.

Class Exams and Final Course Grade

Four independent-samples t tests were conducted to determine whether significant differences existed between the blended and traditional students on class exams and the final course grade (Table 4). No significant difference was noted on the first written exam, while the blended students significantly outscored traditional students on the second, t(85) = 2.21, p = .016, and third, t(85) = 3.01, p = .005, written exams. The final course grade was significantly higher for blended students than for traditional students, with mean scores of 81.53 and 74.47, respectively (p = .001).

Table 4. Independent-samples t tests for the traditional and blended sections on class exams and final course grade.

Source of Variation | Blended Mean (SD) | Traditional Mean (SD) | t value
First written exam | 69.24 (2.04) | 70.69 (1.11) |
Second written exam | 78.15 (1.37) | 73.63 (1.08) | 2.21
Third written exam | 79.48 (1.52) | 74.36 (0.95) | 3.01
Final course grade | 81.53 (1.23) | 74.47 |
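The between-group exam comparisons rely on independent-samples t tests, which can be sketched as follows. This pooled-variance version assumes equal group variances; it is hypothetical illustration code, not the study's analysis.

```python
from statistics import mean, variance

def independent_t(x, y):
    """Pooled-variance independent-samples t statistic (df = nx + ny - 2)."""
    nx, ny = len(x), len(y)
    pooled = ((nx - 1) * variance(x) + (ny - 1) * variance(y)) / (nx + ny - 2)
    return (mean(x) - mean(y)) / (pooled * (1 / nx + 1 / ny)) ** 0.5

# Hypothetical exam scores for two small groups (not the study's data):
t = independent_t([78, 80, 76, 82], [74, 73, 75, 70])
```

With two sections of 44 and 43 students, as in this study, the statistic would be referred to a t distribution with 85 degrees of freedom.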






The present study was designed to examine differences that may occur when individuals learn a motor skill under different instructional teaching methods, and it replicated previous findings by showing that performance differed depending on teaching method. With regard to the knowledge test, all groups improved their cognitive learning of the volleyball setting skill after instruction. Post-test results indicated no significant differences between the groups on the written test. Nevertheless, the attitude test scores of the CI group were more favourable toward the MCAI method than the TI method. Retention test results showed that the groups retained the acquired knowledge. However, the combined method of instruction tended to be the most effective for cognitive learning.

In conclusion, multimedia programs can be used to enhance the effectiveness of teaching strategies and techniques in physical education classes. Computers can be used to teach the cognitive aspects of sports, such as rules and scoring procedures, allowing teachers more time to spend on students' motor skills. However, these conclusions are limited to students aged 12 to 14 years. More studies should be conducted to investigate the effect of MCAI at different ages and for various sport activities. It is also critical to continue researching how students learn in different technological environments. Subsequent research with more attention to software design and validation may begin to show more consistent results in the use of MCAI software. These results lend support to a supplemental role for MCAI alongside more traditional instructional methods in the context presented in this investigation.


Abdous, M., & Yoshimura, M. (2010). Learner outcomes and satisfaction: A comparison of live video-streamed instruction, satellite broadcast instruction, and face-to-face instruction. Computers & Education, 55(2), 733-741.

Alonso, F., López, G., Manrique, D., & Viñes, J. M. (2005). An instructional model for web-based e-learning education with a blended learning process approach. British Journal of Educational Technology, 36(2), 217-235.

Bolliger, D. U., & Wasilik, O. (2009). Factors influencing faculty satisfaction with online teaching and learning in higher education. Distance Education, 30(1), 103-116.

Centra, J. A. (1993). Reflective faculty evaluation: Enhancing teaching and determining faculty effectiveness. San Francisco: Jossey-Bass Inc.

El-Deghaidy, H., & Nouby, A. (2008). Effectiveness of a blended e-learning cooperative approach in an Egyptian teacher education programme. Computers & Education, 51(3), 988-1006.

Deliyannis, I. (2006). Information Society and the Role of Interactive Multimedia. Athens: Fagotto.

Fortune, M. F., Shifflett, B., & Sibley, R. E. (2006). A comparison of online (high tech) and traditional (high touch) learning in business communication courses in Silicon Valley. Journal of Education for Business, 81, 210-214.

Green, S. B., & Salkind, N. J. (2007). Using SPSS for Windows and Macintosh (5th ed.). New Jersey: Prentice Hall.

GUnet Asynchronous eLearning Group (2010). Platform Description (Open eClass 2.3). Retrieved December 18, 2010, from http://eclass.gunet.gr/manuals/OpeneClass23_en.pdf.

Harker, M., & Koutsantoni, D. (2005). Can it be as effective? Distance versus blended learning in a web-based EAP programme. ReCall, 17(2), 197-216.

Low, C. (2007). Too much e-learning ignores the latest thinking in educational psychology. Retrieved December 9, 2010, from http://www.trainingreference.co.uk/e_learning/e_learning_low.htm

Macdonald, J. (2008). Blended learning and online tutoring (2nd ed.). Hampshire, UK: Gower.

Mansour, B. E., & Mupinga, D. M. (2007). Students' positive and negative experiences in hybrid and online classes. College Student Journal, 41, 242-248.

Marsh, H. W. (1982). SEEQ: a reliable, valid, and useful instrument for collecting students' evaluations of university teaching. British Journal of Educational Psychology, 52, 77-95.

Olapiriyakul, K., & Scher, J. (2006). A guide to establishing hybrid learning courses: Employing information technology to create a new learning experience, and a case study. Internet & Higher Education, 9, 287-301.

Oliver, R. (2001). Developing e-learning environments that support knowledge construction in higher education. In S. Stoney & J. Burn (Eds). Working for excellence in the e-conomy. (pp. 407-416). Churchlands: Australia, We-B Centre.

Papastergiou, M. (2007). Use of a Course Management System Based on Claroline to Support a Social Constructivist Inspired Course: A Greek case study. Educational Media International, 44(1), 43-59.

Paechter, M., Maier, B., & Macher, D. (2010). Students' expectations of, and experiences in e-learning: Their relation to learning achievements and course satisfaction. Computers & Education, 54(1), 222-229.

Schober, B., Wagner, P., Reimann, R., Atria, M., & Spiel, C. (2006). Teaching research methods in an internet-based blended-learning setting. Methodology, 2(2), 73-82.

Tabor, S. (2007). Narrowing the distance: Implementing a hybrid learning model for information security education. The Quarterly Review of Distance Education, 8(1), 47-57.

Thurmond, V. A. (2002). Considering theory in assessing quality of web-based courses. Nurse Educator, 27(1), 20-24.

Vernadakis, N., Antoniou, P., Giannousi, M., Zetou, E., & Kioumourtzoglou, E. (2011). Comparing hybrid learning with traditional approaches on learning the microsoft office power point 2003 program in tertiary education. Computers & Education, 56(1), 188-199.

Woo, Y. & Reeves, T. C. (2007). Meaningful interaction in web-based learning: a social constructivist interpretation. Internet and Higher Education, 10, 15-25.

Young, L. E., & Maxwell, B. (2007). Teaching nursing: Theories and concepts. In L. E. Young & B. L. Paterson (Eds.), Teaching nursing: Developing a student-centered learning environment (pp. 8-19). Philadelphia: Lippincott Williams and Wilkins.

Adams, T., Kandt, G., Throgmartin, D., & Waldrop, P. (1991). Computer - Assisted Instruction vs Lecture Methods in Teaching the Rules of Golf. Physical Educator, 48(3), 146-150.

Akhtar, A. (2003). A study of interactive media for deaf learners in post 16 education. Paper presented at the Instructional Technology and Education of the Deaf Symposium, Rochester, NY. Retrieved July 13, 2009, from http://www.rit.edu/~techsym/papers/2003/W3A.pdf

American Sport Education Program (2001). Coaching youth volleyball (3rd ed.). Champaign, IL: Human Kinetics.

Antoniou, P., Moulelis, E., Siskos, A., & Tsamourtzis, E. (2006). Multimedia: An instructional tool in the teaching process of alpine ski. In Current Developments in Technology-Assisted Education (pp. 941-945). Retrieved July 17, 2009, from http://www.formatex.org/micte2006/pdf/941-945.pdf

Cotton, K. (1991a). Computer-assisted instruction (SIRS Close-up No. 10). Portland, OR: Northwest Regional Educational Laboratory. Retrieved July 12, 2009, from www.nwrel.org/scpd/sirs/5/cu10.html