Theories on Learning and Multimedia
Disclaimer: This work has been submitted by a student. This is not an example of the work written by our professional academic writers.
Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of UK Essays.
Published: Mon, 12 Feb 2018
Computer-based instruction was used by the military to create standardized training and to be more cost-effective (Shlechter, 1991). Computer-based instruction allows individual learners to pace the lesson content to meet his or her needs and provides an environment for self-directed learning (Lowe, 2002). Computer-based instruction can be defined as using computers to deliver, track, and/or manage instruction, with computers as the main mode of content delivery. The instruction can include text, images, and feedback. Software advances allow developers to integrate audio narrations, sound clips, graphics, videos, and animation into a single presentation that can be played on a computer (Koroghlanian & Klein, 2000; Moreno & Mayer, 1999). Instruction is classified as multimedia when sound, video, and images are included.
Multimedia incorporates audio and visual elements with the instruction (Craig, Gholson, & Driscoll, 2002; Mayer & Moreno, 2003; Mayer & Sims, 1994; Mayer & Johnson, 2008). Audio components include narrations, which use the verbal channel of the student’s working memory. Visual components include static images, animations composed of multiple still images, video, and/or on-screen text, which use the visual channel of the student’s working memory. When the student receives information through both the verbal and visual channels of his or her working memory and relates the information from the two channels, meaningful learning has occurred (Tempelman-Kluit, 2006). Meaningful learning is “developing an understanding of the material, which includes attending to important aspects of the presented material, mentally organizing it into a coherent cognitive structure, and integrating it with relevant existing knowledge” (Mayer & Moreno, 2003). Meaningful learning, or understanding, occurs when students are able to apply the content they learned and transfer the information to new situations or create solutions to problems rooted in the content presented (Jamet & Le Bohec, 2007; Mayer & Sims, 1994). Allowing students to process and apply the information is essential for knowledge retention and meaningful learning.
“In multimedia learning, active processing requires five cognitive processes: selecting words, selecting images, organizing words, organizing images, and integrating” (Mayer & Moreno, 2003).
Multimedia instruction not only incorporates audio and visual elements but also has the capability of creating nonlinear content. A nonlinear lesson allows learners to take an active role in their learning, bypass sections they have already mastered, and go back to review sections that need reinforcement. It is like putting the student in the driver’s seat, able to reach the destination through a variety of paths, rather than sitting on a bus that stops at every stop along a fixed route.
Cognitive Learning Theories in Multimedia
Multiple multimedia learning theories and principles guide the creation process for multimedia presentations and facilitate student learning. The two overarching theories are cognitive load and dual coding. Several effects and principles related to the two main theories are: split-attention, redundancy, modality, the spatial contiguity principle, the temporal contiguity principle, and the coherence principle. The four theories that are directly relevant to this study are: ___, ___, ___, and ___.
[Add figure: organizational chart of the principles and theories (Paivio, Sweller, and Mayer; Mayer’s theory of multimedia learning).]
The working memory has a finite capacity for processing incoming information in any one channel, visual or auditory. The combined processing at any particular time constitutes the working memory’s cognitive load (Baddeley, 1992; Mayer & Moreno, 2003; Chandler & Sweller, 1991). To take advantage of the memory’s capability, it is important to reduce redundant and irrelevant information, thus reducing the cognitive load (Sweller, 1994; Ardaç and Unal, 2008; Mayer & Moreno, 2003; Tempelman-Kluit, 2006). To keep the instruction efficient, the multimedia should eliminate information that does not apply to the lesson or assignment, as well as content that is nonessential for transfer or retention. Text and images should be carefully selected so the content can be presented succinctly and organized in a logical pattern (Mayer & Moreno, 2003).
Grouping the information into smaller portions reduces the cognitive load. By chunking the information, the working memory has the opportunity to process the content and make connections with prior learning and knowledge; the information is then stored in long-term memory (Mayer & Moreno, 2003). After presenting a portion of the information, the multimedia presentation should include a brief activity to engage the student in processing and storing the information. Utilizing both the auditory and the visual channels of the working memory also helps with the cognitive load and content retention (Tempelman-Kluit, 2006).
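The chunking idea described above can be sketched as a small helper that splits a sequence of lesson elements into segments; the function name, the segment size, and the nine-step lesson are illustrative assumptions, not prescriptions from the literature.

```python
def chunk(elements, size):
    """Split a list of lesson content elements into smaller chunks.

    A learner's working memory can then process one chunk at a time,
    reducing the momentary cognitive load (illustrative sketch only).
    """
    return [elements[i:i + size] for i in range(0, len(elements), size)]

# Example: a hypothetical nine-step lesson broken into segments of three
segments = chunk(["step1", "step2", "step3", "step4", "step5",
                  "step6", "step7", "step8", "step9"], 3)
```

In a presentation, each such segment would be followed by a brief activity before the next segment is shown, as the paragraph above suggests.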
Based on this understanding of memory and processing, cognitive load theory (CLT) was developed by Sweller (1993, 1994, 1998). “The theory assumes that people possess a limited working memory (Miller, 1956) and an immense long-term memory (Chase & Simon, 1973), with learning mechanisms of schema acquisition (Chi et al., 1982; Larkin et al., 1980) and automatic processing (Kotovsky et al., 1985)” (Jeung, Chandler & Sweller, 1997). Cognitive load theory provides a single framework for instructional design based on separate cognitive processing capabilities for visual and auditory information (Jamet & Le Bohec, 2007). A multimedia presentation that conforms to CLT would integrate the auditory and visual information on the screen. A CLT presentation design limits the load on any one channel to prevent cognitive overload and increase learning (Kalyuga, Chandler, & Sweller, 1998; Mayer and Moreno, 2002; Tindall-Ford, Chandler, & Sweller, 1997). Further research conducted by ____ ______ _____ identified three separate types of cognitive load: intrinsic, extraneous, and germane.
Intrinsic cognitive load
The first type of cognitive load is intrinsic, and it is shaped by the learning task and the learning taking place (van Merriënboer and Sweller, 2005). Intrinsic cognitive load occurs between the learner and the content, with the learner’s level of knowledge in the content area playing a role. The other factors are the number of elements the working memory is processing at one time and the element interactivity (van Merriënboer and Sweller, 2005). The level of element interactivity depends on the degree to which the learner can understand the element information independently (Paas, Renkl, & Sweller, 2003). To reduce the total cognitive load (intrinsic plus extraneous plus germane), the designer needs to know the elements involved and how each load can be reduced. If the learner needs to understand several elements at once, and how they interact with each other, then the element interactivity is high. However, if the learner can understand each element independently, then the element interactivity is low (Paas, Renkl, & Sweller, 2003). The intrinsic load arises as the learner’s working memory constructs meaning from the elements presented. While intrinsic load cannot be adjusted, the extraneous load can be modified.
For example, memorizing individual vocabulary words involves low element interactivity because each word can be learned on its own, whereas solving an algebraic equation involves high element interactivity because the learner must process the symbols and the relationships among them simultaneously.
- (van Merriënboer and Sweller, 2005): intrinsic learning involves schema construction and automation.
- Content element interactivity is directly correlated with intrinsic cognitive load (Paas, Renkl, & Sweller, 2003, p. 1).
Extraneous cognitive load
The second type of cognitive load is extraneous, or ineffective, load, and it is affected by the format of the information presented and what is required of the learner. Extraneous cognitive load occurs when information or learning tasks demand high levels of cognitive processing that impede knowledge attainment (Paas, Renkl, & Sweller, 2003). Extraneous cognitive load is also referred to as ineffective cognitive load because the cognitive processing does not contribute to the learning process. The working memory has two independent channels for processing audio and visual information. If the instruction uses only one channel instead of utilizing both, the learner will experience a higher level of extraneous cognitive load (van Merriënboer and Sweller, 2005). Extraneous cognitive load can be reduced through several effects studied in instructional design and cognitive load research, as reported by Sweller et al. (1998), such as split-attention, modality, and redundancy (van Merriënboer and Sweller, 2005).
Germane cognitive load
The third type of cognitive load is germane, which is also affected by the design of the instruction being presented. While extraneous cognitive load accounts for information impeding learning, germane cognitive load focuses on freeing cognitive resources to increase learning; germane load is therefore also referred to as effective cognitive load. Germane and extraneous load trade off against each other. Designing instruction that lessens the extraneous cognitive load allows additional cognitive processing for germane load and increases students’ ability to assimilate the information being presented (Paas, Renkl, & Sweller, 2003). Intrinsic, extraneous, and germane cognitive loads combine into a total cognitive load; this combined load cannot be greater than the learner’s available memory resources.
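The additive relationship among the three load types can be illustrated with a minimal sketch; the 0-10 numeric scale, the capacity value, and the function itself are hypothetical illustrations, since the theory only claims that the combined load must stay within the learner's available working-memory resources.

```python
def total_cognitive_load(intrinsic, extraneous, germane, capacity=10.0):
    """Sum the three load types and flag overload.

    The numeric scale and the capacity of 10 are arbitrary illustrations;
    cognitive load theory states only that the combined load cannot exceed
    the learner's available working-memory resources.
    """
    total = intrinsic + extraneous + germane
    return total, total > capacity

# A poorly designed lesson: high extraneous load pushes the total over capacity
print(total_cognitive_load(6, 4, 2))   # (12, True)

# Reducing extraneous load frees resources for germane processing
print(total_cognitive_load(6, 1, 3))   # (10, False)
```

The second call mirrors the design advice above: because intrinsic load is fixed by the content, lowering extraneous load is what makes room for germane processing.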
An experiment conducted by Tindall-Ford, Chandler and Sweller (1997) was designed to measure cognitive load. The participants were twenty-two first-year apprentices who had completed grade ten of high school. The participants were assigned to one of two treatments: visual-only instructions or audio-visual instructions. The experiment started with an instructional phase, which had two parts and was 100 seconds in length. Part one of the instructional phase explained how to read an electrical table and was either all visual, or visual with audio delivered by a cassette player. After part one of the instructional phase, the participants rated their mental effort (load) on a seven-point scale.
Then the apprentices took part in a test phase that included three sections. The first section was a written test in which participants filled in the blank headings of an electrical table. The second section contained questions about the format of the table. After the first part of instruction and the first two parts of testing, participants were given the same electrical table and had to apply the information it contained to given examples. Participants had 170 seconds to study the information and then completed another subjective mental effort (load) survey. The participants then completed the final section of the test phase: the apprentices had to apply the information and select the appropriate cable for an installation job with the given parameters. After a two-week break, during which the apprentices continued with their normal training, both the two-part instruction phase and the three-part test phase were repeated.
A 2 (group) x 2 (phase) ANOVA was run for the first instruction section and the first two sections of the written test in the test phase, and a significant difference was found, with the audio-visual group performing better than the visual-only group. When the ANOVA was run on the mental load ratings for the two phases, significance was found again, with the audio-visual group rating the mental effort lower than the visual-only group. Similar results were found when analyzing the mental load for part two of the instruction and section three of the written test for both phases. All test results revealed the audio-visual group outperforming the visual-only group, with a lower mental load rating. Therefore the participants’ performance can be linked back to the cognitive load.
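A balanced 2 x 2 analysis of the kind described above can be sketched in Python; the `two_way_anova` helper and the scores below are hypothetical illustrations invented only to show the computation, and are not the data or analysis code from Tindall-Ford, Chandler and Sweller (1997).

```python
def two_way_anova(cells):
    """Balanced two-way ANOVA.

    `cells` maps (factorA_level, factorB_level) -> list of scores, with the
    same number of scores in every cell. Returns the F ratios for factor A,
    factor B, and the A x B interaction.
    """
    a_levels = sorted({a for a, _ in cells})
    b_levels = sorted({b for _, b in cells})
    n = len(next(iter(cells.values())))              # scores per cell
    N = n * len(cells)
    grand = sum(x for xs in cells.values() for x in xs) / N

    def level_mean(factor, level):
        xs = [x for key, scores in cells.items()
              for x in scores if key[factor] == level]
        return sum(xs) / len(xs)

    # Sums of squares for the two main effects
    ss_a = n * len(b_levels) * sum((level_mean(0, a) - grand) ** 2 for a in a_levels)
    ss_b = n * len(a_levels) * sum((level_mean(1, b) - grand) ** 2 for b in b_levels)

    # Interaction and error terms
    cell_means = {k: sum(v) / n for k, v in cells.items()}
    ss_cells = n * sum((m - grand) ** 2 for m in cell_means.values())
    ss_ab = ss_cells - ss_a - ss_b
    ss_error = sum((x - cell_means[k]) ** 2 for k, xs in cells.items() for x in xs)

    df_a, df_b = len(a_levels) - 1, len(b_levels) - 1
    ms_error = ss_error / (len(cells) * (n - 1))
    return (ss_a / df_a / ms_error,
            ss_b / df_b / ms_error,
            ss_ab / (df_a * df_b) / ms_error)

# Hypothetical (invented) test scores, NOT the published data:
scores = {
    ("audio-visual", "phase1"): [8, 9, 8],
    ("audio-visual", "phase2"): [9, 9, 8],
    ("visual-only", "phase1"): [5, 6, 5],
    ("visual-only", "phase2"): [6, 5, 6],
}
f_group, f_phase, f_inter = two_way_anova(scores)
```

With these made-up numbers the group effect dominates (a large F for group, an F near 1 for phase), mirroring the pattern the study reports, where the audio-visual format outperformed the visual-only format across phases.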
[An experiment conducted by Ardaç and Unal (2008): to be completed.]
Based on the experiment above by Tindall-Ford, Chandler and Sweller (1997), when selecting a format for a presentation, audio-visual is the better choice. This is true not only from a modality theory perspective but also from a cognitive load theory perspective, since visual-only formats cause a higher level of mental effort for participants.
The split-attention effect is one of the effects that arises directly from cognitive load theory.
When images or animations are accompanied by redundant on-screen text, the visual channel must attend to multiple visual elements, and attention is split among the many visual pieces, creating the “split-attention” effect. Having several visual components, such as text and animations, increases the cognitive load and hampers learning (Ardac & Unal, 2008). Split-attention occurs when instructional material contains multiple sources of information that are not comprehensible by themselves and must be integrated, either physically or mentally, to be understandable (Jeung, Chandler & Sweller, 1997; Kalyuga, Chandler, & Sweller, 1998; Tindall-Ford, Chandler, & Sweller, 1997). The split-attention effect can be minimized by placing related text close to the image in the presentation or by using audio narration for an animation instead of on-screen text (Jamet & Le Bohec, 2007).
One experiment designed to test the split-attention theory was conducted by Mayer, Heiser, and Lonn (2001). In this experiment there were 78 participants selected from a university psychology subject pool. The experiment was a 2 x 2 design with summarized on-screen text as one factor and seductive details as a second factor. There were four groups: a no text/no seductive details group with 22 students, a text/no seductive details group with 19 students, a no text/seductive details group with 21 students, and a text/seductive details group with 16 students. The participants had a median age of 18.4 and were 33% male. All participants had little prior knowledge of meteorology, with a score of seven or lower out of eleven questions.
Participants viewed a computer-based multimedia presentation. The versions with text included a summary of the narration, and the versions with seductive details included additional narrations with real-world examples. The experiment started with participants completing a questionnaire to collect demographic and prior knowledge information. Then participants watched a presentation with one of the treatments at individual computers. At the completion of the video, students completed a retention and a transfer test.
Students who received on-screen text scored significantly lower on both the transfer and retention tests than students who did not have on-screen text. These results are consistent with the split-attention theory as it relates to the cognitive theory of multimedia. Students who received seductive details also scored lower on both the transfer and retention tests than students who did not have seductive details. These results indicate that including seductive details in a presentation hampered student learning.
Another experiment was conducted by Tindall-Ford, Chandler, and Sweller (1997). This experiment had thirty participants who were first-year trade apprentices from Sydney. The participants were randomly assigned to one of three groups, each with ten participants. The first group was the visual-only group, which received diagrams and related textual statements. The second group, the integrated group, received the same textual statements, but the statements were physically integrated into the diagrams. The third group, the audio-visual group, received the same diagrams, but the textual statements were presented as audio instead of text.
The participants first read the instructional materials; the audio group listened to the information on an audio cassette. Then participants completed a written test with three sections (a labeling section, a multiple-choice section, and a transfer section), and finally a practical test. While analysis of the multiple-choice section revealed no significant difference, the data indicated the audio-visual group performing better than the visual-only group. The section three data, the transfer test, showed a significant difference, with the audio-visual and the integrated groups performing better than the visual-only group. The findings revealed that the audio-visual and the integrated formats outperformed the visual-only format. The non-integrated text group performed the poorest of the three groups, which supports the split-attention effect.
A set of two experiments was conducted by Mayer and Moreno (1998) to verify split-attention and dual processing. The first experiment had 78 college students from a university psychology pool with little prior knowledge about meteorology. The participants were randomly assigned to one of two groups: the concurrent narration group (AN) had 40 students and the concurrent on-screen text group (AT) had 38 students. Participants were tested in groups of one to five and were seated at individual cubicles with computers.
The participants first completed a questionnaire, which assessed their prior knowledge and collected demographic information. Then the students watched a presentation about lightning formation; the students in the AN group wore headphones. The presentation was 140 seconds long and included an animation of the lightning process. The AN version had narration, and the AT version had on-screen text that was identical to the narration and used the same timings as the narration version.
After the presentation, the participants had 6 minutes to complete the retention test, in which they had to explain the lightning process. Then they had 3 minutes to complete a transfer test, which consisted of four short essay questions. Finally, the participants had 3 minutes to complete a matching test, in which the students had to label parts of an image based on the lightning formation statements provided. A split-attention effect occurred for all three tests (retention, matching, and transfer), with the AN group scoring higher than the AT group. These results also align with dual processing.
In the second experiment by Mayer and Moreno (1998), the content was changed to how a car’s braking system operates. This experiment had 68 college students from a university psychology pool with little prior knowledge about car mechanics. The concurrent narration group (AN) had 34 students and the concurrent on-screen text group (AT) had 34 students. Participants were tested in groups of one to five and were seated at individual cubicles with computers. The participants first completed a questionnaire, which assessed their prior knowledge and collected demographic information. Then the students watched the presentation about how a car’s braking system operates; the students in the AN group wore headphones.
The presentation was 45 seconds long, included an animation of a car’s braking process, and was broken into 10 segments. The AN version had narration and a brief pause between segments; the AT version had on-screen text that was identical to the narration and used the same timings as the narration version. The AT group’s text appeared under the animation and stayed visible until the next segment started. After the presentation, the participants had 5 minutes to complete the retention test, in which they had to explain the braking process. Then they had 2.5 minutes to complete a transfer test, which consisted of four short essay questions. Finally, the participants had 2.5 minutes to complete a matching test, in which the students were given parts of the braking system and had to identify and label the parts in an image.
Again, a split-attention effect occurred for all three tests (retention, matching, and transfer), with the AN group scoring higher than the AT group. These results also align with dual processing (pp. 318-319).
The experiments indicate that adding on-screen text in addition to narration impedes student learning. The second experiment clarifies the split-attention effect: if text is included, it needs to be placed near the relevant part of the diagram. If the text is not near the images, an increase in cognitive load occurs as the learner tries to combine the images and text. The last two experiments further clarify the split-attention effect with three measures in two different experiments. Therefore narration, rather than text, should be used to accompany animation and images.
The working memory of a human has two channels: a visual channel that processes information such as text, images, and animation through the eyes, and an auditory channel that processes sounds such as narration through the ears. According to the “modality principle,” verbal information in multimedia explanations should ideally be presented auditorily rather than as on-screen text (Craig, Gholson, & Driscoll, 2002; Moreno & Mayer, 1999; Mayer, 2001; Mayer & Johnson, 2008; Mayer, Fennell, et al., 2004). When the information is presented auditorily, the working memory uses both channels, visual and auditory, to process the information being heard and the information on the screen (Tabbers, Martens, & van Merriënboer, 2004). By utilizing both working memory channels, the mind can allocate additional cognitive resources and create relationships between the visual and verbal information (Moreno and Mayer, 1999). When learning occurs using both memory channels, the memory does not become overloaded and the learning becomes embedded, which improves the learner’s understanding (Mayer & Moreno, 2002).
Several experiments have been conducted relating to modality theory. One experiment, a geometry lesson taught in an elementary school math class, focused on the conditions under which the modality effect would occur. The researchers, Jeung, Chandler, and Sweller (1997), created a three-by-two experiment that included three presentation modes and two search modes. The three presentation modes were visual-visual, audio-visual, and audio-visual-flashing. In the visual-visual group, the diagrams and supporting information were both presented visually, with the supporting information as on-screen text; in the audio-visual group, the diagrams were presented visually and the supporting information auditorily. In the audio-visual-flashing group, the supporting information was presented auditorily and the diagrams visually; however, parts of the diagram flashed when the corresponding audio occurred.
The two search modes were a high search mode and a low search mode. The high search mode labeled each end of a line separately, so a line was identified by the letters at each end, such as “AB,” whereas the low search mode labeled the entire line with a single letter, such as “C,” reducing the search needed to locate the information. The experiment content was geometry; the study population was sixty students from year six of a primary school with no previous geometry experience, creating ten students per group.
The students participated in the experiment individually during class time. Students were randomly assigned to one of six groups, and the information was presented to the students on the computer. The experiment had three phases: an introduction phase, in which the problem was identified and presented in one of the six modes as assigned to the student; an acquisition phase, which included two worked examples on the computer, after each of which students were required to complete a similar problem with pencil and paper; and finally a test phase that included four problems for students to complete with pencil and paper. In the test phase the researchers found a significant effect of presentation mode but not of search complexity.
Additional data analysis revealed that the significant difference between the presentation modes occurred in the high search group, but not the low search group. Analysis of the presentation modes for the high search group revealed that the audio-visual-flashing group achieved a higher level of performance than the visual-visual group. The experiment confirmed the modality theory hypothesis that a mixed-mode presentation (audio-visual-flashing) would be more effective because the multiple modes increase the working memory capacity.
However, these results were only found with the high search group and not the low search group, so the researchers conducted two additional experiments focusing on high search and low search separately. The second experiment focused on high search. For this experiment, the population included thirty students from a Sydney public primary school who were in year six and had not been taught parallel lines in geometry. The procedure was the same as before; however, the geometry content was a complex diagram. The groups were visual-visual, audio-visual, and audio-visual-flashing, with ten students in each group. The results were consistent with modality theory: students in the audio-visual-flashing group performed better than the visual-visual group, and no differences were found between the visual-visual group and the audio-visual group. Therefore, for high search materials, the dual presentation mode increased performance when a visual reference was provided.
The third experiment focused on low search. In this experiment the population included thirty students from a Sydney public primary school who had not been taught parallel lines in geometry. The groups were visual-visual, audio-visual, and audio-visual-flashing, with ten students in each group. The procedure was similar to the first experiment; however, the geometry content was a low search diagram and contained only two labels. The results revealed that the modality effect did occur with the transfer problems, and the visual-visual group took more time than the audio-visual and audio-visual-flashing groups. The difference was that with the low search content the audio-visual group performed better than the visual-visual group, meaning that for low search materials the flashing indicator is not as beneficial. The three experiments demonstrated that using mixed modes of presentation increases the effectiveness of the working memory and the capacity for learning. The results indicated that when content requires a high level of search, visual indicators need to be included to free up cognitive resources and increase memory capacity.
Therefore, based on the work of Jeung, Chandler, and Sweller (1997), when the computer multimedia presentations for this study were created, a visual cue of a yellow box with a red outline was used as a visual indicator to help users locate where the mouse is clicking, so students are not scanning the entire video screen for the mouse. In addition to the visual references, one version of the video included audio only and another version contained text only, to confirm the modality effect.
Selecting the most appropriate part of the working memory to disseminate the information, and using the auditory channel to process information via audio instead of visual text, allows the visual channel of the working memory to focus on the images and animations that coincide with the audio. It is similar to watching a news program on television: the viewer’s ears listen to the news anchor while the working memory processes that information, the eyes watch the corresponding footage, and the brain combines the two pieces of information. However, if closed captioning is turned on, the viewer is reading the same information he or she is hearing, which is redundant.
The redundancy effect can be defined as occurring when the information being presented appears as both an image and as on-screen text, so the visual channel is responsible for all the information while the audio channel is not used (Mayer, 2001; Barron & Calandra, 2003). “The distinction between the split-attention and redundancy effects hinges on the distinction between sources of information that are intelligible in isolation and those that are not. If a diagram and the concepts of functions it represents are sufficiently self-contained and intelligible in isolation, then any text explaining the diagram is redundant and should be omitted in order to reduce the cognitive load” (Kalyuga, Chandler, & Sweller, 1998). Redundancy can occur with full text and full audio, full text and partial audio, or partial text and full audio (Barron & Calandra, 2003). The redundant information may be duplicate text and narration, a text description and a diagram, or on-screen text and audio narration. The duplicate information causes an increase in the learner’s working memory load because the visual channel is processing the same information from multiple sources (Kalyuga, Chandler, & Sweller, 1998; Mayer, Heiser and Lonn, 2001). The redundancy effect is evident when student performance is hindered while redundant information is present and increases when the redundant information is removed (Kalyuga et al., 1998; Mayer, Heiser and Lonn, 2001; Jamet & Le Bohec, 2007). The redundancy effect can be eliminated by presenting on-screen text as narration, presenting information as a diagram instead of a lengthy text explanation, and delivering information in a single mode that works complementarily with the other content being delivered (Mayer, Heiser and Lonn, 2001).
Several experiments have been conducted relating to redundancy theory. One experiment, conducted by Jamet and Le Bohec (2007), was designed to test the hypothesis that a redundancy effect would be observed with full text and narration and that presenting sequential text would reduce the redundancy effect. The experiment had 90 undergraduate students from a psychology pool in France, with a median age of 20. The participants were randomly assigned to one of three groups: no text, full text with corresponding audio, and sequential text. The experiment started with a prior knowledge test with four general questions and two specific questions. Then the participants viewed three documents about memory functioning; the presentation lasted about 11 minutes. After the presentation the participants took a retention test with twelve open-ended questions. Then they took a transfer test with twelve inferential open-ended questions. Finally, the participants had to complete a diagram by labeling its components.
Results revealed a significant difference in the retention scores, with the no-text group performing better than the full-text group and the sequential-text group. Similar results were reported for the diagram completion portion of the experiment and the transfer task. There was no significant effect to indicate that the redundancy effect would be reduced by presenting redundant text sequentially. There was a significant effect between the no-text group and the other two groups for the transfer, retention, and diagram tests, which validates the redundancy effect. Based on these findings, having on-screen text in addition to narration overloads the visual channel and decreases learning. The authors did point out that the participants had a difficult time understanding the documents presented and could not control the presentation.
Another set of experiments was conducted by Mayer and Johnson (2008) to test the redundancy theory. The first experiment focused on short redundant text that was displayed on-screen.