A REVIEW OF THE LITERATURE
This review comprises two parts. I begin with the research on teacher judgment (TJ), reporting on the rationale, methods, types of assessments, and key findings of the existing teacher judgment literature related to literacy instruction. I also summarize findings concerning the role of student and teacher moderator variables in influencing TJ, highlighting empirical gaps that provide the basis for my research questions. Second, I discuss key studies on developmental aspects of reading, writing, and spelling, as well as the findings of studies of teachers’ understandings of related concepts.
Research on Teacher Judgment of Students’ Literacy Skills
In both mainstream media, and in scholarly research, there has long been concern and debate about the extent to which teachers are prepared to guide the learning of typical students and to ameliorate the struggles of atypical learners (e.g., Moats, 1994 & 2009; Snow, Burns & Griffin, 1998; Spear-Swerling & Brucker, 2004; Stotsky, 2009; Torgesen, 2002). The bodies of declarative knowledge and baseline competencies for teachers have therefore long been the subjects of scholarly inquiry in multiple disciplines. Shulman (1986) offered a widely cited conceptual framework for understanding the competencies of effective teachers in various disciplines. Grounded in his earlier experience with the training and assessment of the clinical skills of medical students, Shulman proposed a model of teacher competency that is multi-faceted. He theorized that teachers’ competence includes, but is not limited to, such broad categories as domain specific content knowledge, pedagogical knowledge, curriculum knowledge, knowledge of educational contexts, and knowledge of learners and their characteristics. A defining characteristic of Shulman’s model was his notion of pedagogical content knowledge (PCK) – knowledge that includes ways of representing and formulating subject matter that make it comprehensible to others. Such an ability necessarily presumes mastery of a great deal of relevant content knowledge (CK). As Shulman (1986) put it, “to understand what a pupil understands,” requires “a deep grasp of both the material to be taught and the processes of learning” (p. 19).
Still another of Shulman’s categories, “knowledge of learners and their characteristics,” includes judgment acumen – the ability of a teacher to accurately estimate what a student knows and can do. Although judgment accuracy alone is not sufficient for effective teaching, Shulman and others (e.g., Artelt & Rausch, 2014) have described teacher judgment (TJ) as a key competence in the context of teaching and learning because it is a prerequisite for adequate classroom organization and for adaptive teaching. Shulman (1987) conceptualized pedagogical content knowledge (PCK) as including (but not limited to) accurate judgment ability.
Defining Teacher Judgment
In lay terms, the term judgment may connote opinions of a person’s moral character. In other arenas, such as competitive individual sports like ice skating or diving, the term also implies subjective opinions about, for example, creative expression and execution of required moves. In law, as well, judgment also signifies an opinion which, while based on evidence, is still subjective. In other cases, judgment denotes something slightly different – and relates instead to an ability to accurately forecast an outcome which is purportedly uninfluenced by opinion. In the medical field for example, Doust (2012) described clinical acumen as the ability to recognize “among all the women who complain of feeling tired, the one who has life-threatening Addison’s disease” or “among all the children [practitioners] see with diarrhea, the one with Crohn’s disease requiring urgent surgery.”
In teacher judgment studies, the word judgment is closer in meaning to its use in the medical example. In some studies, the phrase “diagnostic competence” is used instead. Accordingly, in the TJ literature, the term diagnostic competence refers to teachers’ ability to judge student achievement correctly. Less frequently, the term has also been used to refer to teachers’ ability to correctly judge task demands (Artelt & Rausch, 2014). In this review, I use the terms teacher judgment and diagnostic competence interchangeably.
According to Shulman, accurate estimation skills are enormously important for teachers because such information regularly informs the moment-by-moment instructional decisions that teachers make. If a teacher correctly senses that a learner is struggling mightily but has no real sense of the particular source of the student’s difficulty, a poorly chosen example may compound the difficulty or prove no more useful than no help at all. Evaluation, he said, includes informal checking for understanding as well as more traditional formal assessments that may gauge students’ understanding at the end of lessons or units. Both types, he wrote, require knowledge and knowledge transformation. As noted earlier, “to understand what a pupil understands” requires “a deep grasp of both the material to be taught and the processes of learning” (p. 19).
It is certainly arguable whether poor instruction belongs in the same category as misdiagnosis of Addison’s or Crohn’s disease, but it is certainly true that students deserve classroom teachers with keen insight into their unique struggles. Shulman’s ideas have resonated in the writing of others who have focused on the particular declarative and pedagogical content knowledge considered important to effective general early literacy instruction and necessary for teachers in diagnosing and intervening effectively with reading, writing, and spelling difficulties (e.g., Moats, 2009; Snow, Griffin, & Burns, 2007; Stotsky, 2006). Such knowledge includes, but is not limited to, understanding of the continuum of phonological development and of the complex orthography of English and the ways it encodes speech. Research on teacher judgment in the particular cases of reading, writing, and spelling has resulted in what Coladarci (1992) called a proverbial glass half full.
Teacher Judgment Research Design Considerations
Artelt and Rausch (2014) emphasized that when looking at individual differences in teacher judgment accuracy, it is important to take into account features of the judgment task. In their 1989 meta-analysis of the TJ literature, Hoge and Coladarci examined the results of sixteen studies and distinguished between cases in which teachers were asked to make direct versus indirect judgments. In studies where teachers made direct judgments, they were asked to predict scores on a particular test or even performance on particular items of a test. In contrast, in studies that involved indirect judgment tasks, teachers were asked, for example, to rank order students overall, and those ratings or rankings were then correlated with students’ performance on a test with which the teacher may or may not have been familiar. In each of the sixteen studies they reviewed, investigators reported on the match between teacher estimation of student ability and student performance on an objective criterion-referenced measure of the same construct administered on or about the same date. Their operational definition of accuracy was the correspondence between two sets of values: teachers’ judgments of their students in some area and those students’ actual performance on a related standardized test.
In a more recent meta-analysis of teacher judgment studies, Südkamp, Kaiser, and Möller (2012) categorized teacher judgment studies by similar design characteristics. For example, they also distinguished between designs requiring direct versus indirect judgments but used slightly different terms: informed versus uninformed judgments. Tasks were informed, they said, in studies where teachers judged performance on an item-by-item basis. In contrast, uninformed judgments were more similar in nature to the judgment tasks called indirect by Hoge and Coladarci (1989). In both meta-analyses, teachers were reported to be more accurate in their predictions when the judgment task was more direct or informed. In Südkamp et al.’s meta-analysis of over 75 studies, the median correlation was .53, and in studies where TJ was informed, they reported correlation coefficients that were even higher. As an example, they cited a study by Feinberg and Shapiro (2003), who found a correlation of .70 between students’ test performance and direct/informed teacher judgment, whereas the correlation with indirect/uninformed teacher judgments was .62. Such findings, they said, were in line with those of Hoge and Coladarci (1989), who reported correlations ranging between .28 and .92 and a median correlation of .66.
Although such a correlation suggests a moderate to strong correspondence between teacher judgment and student achievement, Hoge and Coladarci (1989) were careful to point out that absolute accuracy actually varied widely depending on which teacher judged which students on which tasks. Furthermore, Hoge and Coladarci explained, twelve of the sixteen studies used pooled data, lumping all teachers and/or all students together to see if, overall, teachers were accurate judges of students’ ability or achievement. According to the authors, inherent in this method is the risk of over- or under-estimating what teachers actually know about what their students know.
Other types of statistical procedures, they said, provide more specific and potentially useful information about TJ. One such alternative approach involves the calculation of “hit rates.” In studies using this approach, researchers have investigated how well teachers predicted individual children’s performance on an item-by-item basis. To calculate a hit rate, the total correct judgments made (at the item level) is divided by the total judgments made. In one such study by Doore (2010), preschool teachers were asked to judge whether their students would respond correctly to individual items on an alphabet subtest of the Test of Early Reading Abilities – Third Edition (Reid, Hresko, & Hammill, 2001). Doore compared teachers’ concurrent judgments with students’ actual performance on individual items and calculated a hit rate of 70.1%. At first glance, such a figure lends support to the conclusion that teachers can and do make reasonably accurate estimates of what students know. Other investigations into the source of variation may lead to different conclusions. Consider, for example, the research on student characteristics as moderator variables in teachers’ judgment accuracy.
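The hit-rate calculation described above is simple arithmetic: correct item-level judgments divided by total judgments. A minimal sketch follows; the teacher predictions and student responses shown are hypothetical, not Doore’s (2010) data.

```python
# A sketch of the "hit rate" statistic used in item-level TJ studies.
# Predictions and responses are coded True (correct) / False (incorrect).

def hit_rate(predicted, actual):
    """Proportion of item-level judgments that match actual performance."""
    if len(predicted) != len(actual):
        raise ValueError("prediction and performance lists must align")
    hits = sum(p == a for p, a in zip(predicted, actual))
    return hits / len(predicted)

# Hypothetical example: a teacher predicts one student's success on 10 items.
teacher_predictions = [True, True, False, True, True, False, True, True, True, False]
student_responses   = [True, False, False, True, True, True, True, True, True, False]

print(f"Hit rate: {hit_rate(teacher_predictions, student_responses):.1%}")  # 80.0%
```

In a study like Doore’s, this proportion would be computed over all judged items (and typically aggregated across students) to yield a figure such as the 70.1% he reported.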
Student Characteristics as Moderator Variables
In the introduction to their meta-analysis, Südkamp et al. (2012) made the important point that, on the one hand, the combined results of the last 30 years of TJ research “may be interpreted as indicating that teachers’ judgments are quite accurate; on the other hand, their judgments are evidently far from perfect, and more than two thirds of the variance in teachers’ judgments cannot be explained by student performance” (p. 744). Efforts to understand such variance have been the explicit focus of many TJ studies. The designs of such studies have involved estimates based on correlation but have also included statistical techniques aimed at addressing what Cunningham, Stanovich, and Maul (2011) called the “third-variable problem.” When a correlation between two variables (e.g., TJ accuracy and student achievement) varies widely, a natural response among those attempting to explain such variance is to look for third variables that explain it. Through the use of regression-based statistical modeling techniques, Cunningham et al. explained, the correlation between two variables can be recalculated after the potential influence of other key variables is removed, or “factored” or “partialed” out (p. 53).
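The partialing-out procedure Cunningham et al. described can be illustrated concretely: regress each of the two focal variables on the third variable, then correlate the residuals. The sketch below is a generic illustration of the technique, not a reconstruction of any cited study’s analysis, and uses only simple least squares.

```python
# A sketch of partial correlation: the association between xs and ys after
# the linear influence of a third variable zs has been "partialed out."

def mean(xs):
    return sum(xs) / len(xs)

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def residuals(ys, zs):
    """Residuals of ys after removing its linear dependence on zs (OLS)."""
    mz, my = mean(zs), mean(ys)
    slope = (sum((z - mz) * (y - my) for z, y in zip(zs, ys))
             / sum((z - mz) ** 2 for z in zs))
    intercept = my - slope * mz
    return [y - (intercept + slope * z) for y, z in zip(ys, zs)]

def partial_correlation(xs, ys, zs):
    """Correlation between xs and ys with zs factored out of both."""
    return pearson(residuals(xs, zs), residuals(ys, zs))
```

In a TJ study, `xs` might be teacher judgments, `ys` student achievement scores, and `zs` a hypothesized moderator such as rated student behavior; a partial correlation much lower than the raw correlation would suggest the third variable accounts for part of the apparent judgment accuracy.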
Student characteristics hypothesized as third variables possibly influencing teacher judgment accuracy related to literacy have included students’ behavior, gender, and general abilities relative to their peers. Südkamp et al. described a study by Bennett, Gottesman, Rock, and Cerullo (1993), for example, in which teachers who perceived specific students as exhibiting bad behavior also predicted lower academic performance for the same students, regardless of the students’ actual academic skills. Such findings, however, have not been without contradictory results. Bates and Nettelbeck (2001), for example, investigated whether behavioral problems of students affected their teachers’ ability to accurately judge their reading abilities and found no statistically significant evidence of such bias.
Findings from investigations of a relationship between student gender and TJ accuracy have been mixed. According to Hoge and Coladarci’s (1989) meta-analysis, student gender was not significantly related to judgment accuracy. Hinnant, O’Brien, and Ghazarian (2009), on the other hand, found a “marginally significant interaction” (p = .08) between student gender and the relationship of first-grade teacher expectations to third-grade reading achievement. In their study, teachers tended to underestimate the later reading achievement of minority males more than that of girls. In an earlier study by Beswick, Willems, and Sloat (2005), student gender was also found to be significantly related to teacher judgment accuracy: teachers underestimated the performance of boys, despite there being no actual difference between the performance of the boys and girls in the study.
Begeny, Krouse, Brown, and Mann (2011) examined teacher judgment across student ability levels. In their study, 27 first- through fifth-grade teachers made indirect concurrent estimates of the abilities of eight of their students on a broad range of literacy skills by responding to items on a 5-point scale ranging from consistently poor to consistently successful. Sample items included “Please rate the student’s level of fluency during oral reading.” The researchers found that teachers in their study judged low- and average-performing readers less accurately than high-performing readers. This finding supported that of Hoge and Coladarci (1989), who also observed that students’ general academic ability may be a factor influencing the accuracy with which teachers judge student achievement. In a study focusing specifically on teacher judgment of reading, Coladarci (1992) found that teachers were substantially less accurate in judging the performance of low-ability students (defined as reading one year below grade level) than they were in estimating the performance of students reading one year above grade level (62% accurate for low-ability students versus 85% accurate for higher-ability students).
Conflicting results, though, were reported in a dissertation by Martin (2005). In her study investigating the influence of student ability level on TJ, she found that teachers actually made more accurate judgments for lower achieving students. In her discussion section, Martin suggested that current educational policy has perhaps encouraged early elementary teachers to be more sensitive to the needs of low achieving readers.
In 2003, Hamilton and Shinn investigated what Nathan and Stanovich (1991, p. 7; cited in Meisinger & Bradley, 2009) termed the “red herring” of reading research literature – the existence of word callers – students who purportedly read accurately but are weak in reading comprehension. Studies of word callers have made an important contribution to the TJ literature by highlighting a particular area of concern: the distinction between estimating word-level reading accuracy and estimating reading comprehension, and the extent to which conflation of the terms has muddied the interpretation of results of TJ studies. Hamilton and Shinn asked teachers to make concurrent direct judgments about student reading by first nominating word callers and similarly fluent peers for inclusion in their study. They found that although teachers perceived all the students they selected for the study as having similar oral reading fluency, in actuality, “teacher-identified word callers read significantly and considerably more slowly than their peers.” Such findings suggest that teachers may have difficulty accurately diagnosing and therefore treating the actual problems of some of their struggling readers. This particular finding is worrisome given the consensus among researchers that for children who experience literacy difficulties in the early grades (K-3), considerable problems exist at the word level (Washburn, Joshi, & Binks-Cantrell, 2011).
Teacher Variables Mediating TJ Accuracy
Variability in judgment acumen has also led researchers to explore whether particular teacher characteristics are related to judgment accuracy. Education level and years of experience are among the differences that have been explored most frequently by a handful of researchers. Bates and Nettelbeck (2001), for example, investigated the possible mediating role of years of experience but concluded that experience did not seem to make teachers better or poorer judges of students’ reading accuracy or comprehension. Similarly, Begeny, Krouse, Brown, and Mann (2011) used chi-square tests to explore the relationship between accuracy and years of experience, grade taught, and level of education (M.A. or not) and found no significant differences across any of those teacher variables. Valdez (2013) also examined teacher experience as a possible moderator of the concurrent relationship between teachers’ judgment of reading skill and students’ performance on standardized tests of reading performance. He too found that neither years of experience nor educational attainment were a significant moderator variable.
One explanation for such findings may relate to the uncertain content of in-service professional development and graduate coursework that experienced teachers and teachers with higher levels of education might take to meet certification renewal requirements. In their text, Knowledge to Support the Teaching of Reading, Snow, Griffin, and Burns (2005) emphasized that “the quantity and complexity of the declarative and practical knowledge teachers need is so great that it simply cannot be mastered adequately in the brief time available during a pre-service program” and that, as a result, novice teachers can at best be expected to “do no harm.” As for more experienced teachers, Snow has also asserted that while there is recognition that ongoing learning is as important for teachers as it is for medical doctors or car mechanics, the reality is that “nobody worries about what courses [teachers are] taking. . . the fact of the matter is, you just have to show that you’ve taken a course” (C. Snow, personal communication, January 2, 2016). In other words, although it is now common practice to require teachers to earn a master’s degree or take post-graduate credits, it may well be that even more experienced and more educated teachers have not been exposed to the particulars of a knowledge base that might inform and possibly improve their diagnostic perspicacity.
Task Variables That May Moderate Judgment Accuracy
Shulman’s definition of PCK included the ability to judge the difficulty level and demands of selected instructional materials. Doore (2010), in his study of preschool teacher judgment, found that judgment accuracy appeared to be related to item difficulty. That is, the preschool teachers in his study were less accurate in judging student performance on the most difficult items of the Test of Early Reading Abilities – Third Edition (TERA-3) Alphabet subtest than on easier items, with item difficulty determined by the frequency of correct answers by students. Doore calculated an effect size of nearly +1.00 SD for the difference in judgment accuracy on easier versus more difficult items. The meaning of an effect size varies by context, but a standard interpretation offered by Cohen (1988) is that effect sizes of about 0.8 or greater are large, those around 0.5 are moderate, and those around 0.2 are small.
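The standardized mean difference underlying effect-size statements such as Doore’s can be sketched briefly. The code below uses hypothetical accuracy scores (not Doore’s data) and applies Cohen’s (1988) conventional benchmarks.

```python
# A sketch of Cohen's d: difference of group means divided by the pooled
# standard deviation, plus Cohen's (1988) rough benchmarks for magnitude.

def cohens_d(group_a, group_b):
    """Standardized mean difference between two independent groups."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    var_a = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (ma - mb) / pooled_sd

def interpret(d):
    """Cohen's (1988) conventional labels for |d|."""
    d = abs(d)
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "moderate"
    if d >= 0.2:
        return "small"
    return "negligible"

# Hypothetical judgment-accuracy scores on easy vs. difficult items.
easy_items = [0.82, 0.78, 0.85, 0.80, 0.79]
hard_items = [0.60, 0.64, 0.58, 0.66, 0.62]
d = cohens_d(easy_items, hard_items)
print(f"d = {d:.2f} ({interpret(d)})")
```

An effect size of nearly +1.00 SD, as Doore reported, would thus fall comfortably within Cohen’s “large” range.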
Empirical Gaps and Questions for Further Research
In summary, although accurate teacher judgment is an important component of teacher competency, research has raised questions about whether teachers’ estimations of students are always accurate enough to inform effective instruction. In the conclusion to their meta-analysis, Madelaine and Wheldall (2009) reminded the reader to consider the purpose of teacher judgment: it is important, they stressed, to think about what it is we want teachers to judge and why. Understanding what students understand is essential to planning effective instruction for all early readers and to planning targeted interventions for those who struggle. Südkamp et al. (2012) conjectured that teacher judgment accuracy may be associated with what they termed teachers’ expert knowledge, which has been described as including the ability to perceive meaningful patterns of information that are not noticed by others. In the case of spelling, researchers have offered a powerful lens through which to view student spelling errors: knowledge of developmental spelling theory allows teachers to discern important patterns of information which may drive more effective instruction and, in turn, increase student achievement. Doore’s finding also invites further investigation of a possible relationship between item difficulty and teacher judgment accuracy, and the proposed study will examine such a relationship in the context of spelling error analysis. Few studies were located, however, that have investigated the mediating role of teacher knowledge on judgment of student achievement, and no studies were located that explored teachers’ understanding of the particular complexities of spelling or the possible direct or indirect impact on judgment accuracy or student achievement in spelling.
A Knowledge Base for Early Literacy Instruction: Developmental Aspects of Reading, Writing, and Spelling
Measures of teacher knowledge have been used in many studies whose aim was to explore a relationship between teacher knowledge and student achievement. Intuitively, it was expected that greater teacher knowledge might result in greater student achievement. Some studies have reported modest positive relationships between teacher knowledge and student achievement (e.g., Bos, Mather, Narr, & Babur, 1999; McCutchen, Harry, Cunningham, Cox, Sidman, & Covill, 2002; Spear-Swerling & Brucker, 2004). These positive results notwithstanding, other studies have resulted in mixed findings (e.g., Carlisle, Correnti, Phelps, & Zeng, 2009; Duggar, 2016). Such mixed results should not be interpreted to mean that teacher knowledge is irrelevant to student achievement; more likely, as Spear-Swerling and Cheesman (2012) asserted, such findings are evidence that “multiple factors influence student outcomes, and that relationships affecting reading achievement are complex” (p. 1693).
Piasta, Connor, Fishman, and Morrison (2009) were among the first to hypothesize that mixed findings on the relationship between teacher knowledge and achievement might be the result of over-simplistic conceptual models of the relationship between teacher knowledge and student outcomes. Piasta et al. hypothesized that “students’ literacy skill gains would not be predicted by teacher knowledge alone but by teacher knowledge as it informed classroom practices” (p. 228). A strength of the Piasta et al. study was that the research design included measures of how knowledge was enacted in the classroom. The results of their study confirmed that indeed, there were important interactions among the variables in the study. Specifically, student gains in word reading skills improved substantially only when teachers with higher knowledge spent more time on explicit decoding instruction. Conversely, the more time teachers with low knowledge scores spent on explicit decoding instruction, the weaker their students’ spring word reading scores were. In their conclusion, the authors called for further research that considers such complex relations among variables.
Current understandings of a developmental course in spelling skills are informed by what has been called the Rosetta Stone of spelling research — the seminal work by Charles Read (1971). Over time, Read’s findings have been replicated, refined, and embellished. Read observed that although individual children take somewhat different routes to becoming literate adults, they nevertheless pass predictable and recognizable landmarks along their journeys, sometimes called phases or stages. There is now consensus that American English speakers learning to spell, both children and adults, including those identified as learning disabled, pass through predictable phases/stages in quite similar sequences as they gradually move toward mastery of conventional spelling (Beers & Henderson, 1977; Ganske, 1999; Read, 1971 & 1975; Treiman, 1993; Zutell, 2008).
In the earliest stages, distinctions between children’s drawing and writing may be difficult for adult observers to discern. In Ehri’s research, this phase is termed pre-alphabetic; Ganske called it emergent spelling. Henderson and Templeton described the earliest stages as pretend writing. Regardless of the label, what characterizes this earliest phase is that prior to marshalling any true understanding of letter sound correspondence, young children can and do frequently refer to different parts of their own writing as letters or words vs. pictures or drawings. Even young children usually have had many chances to look at writing and to observe that different “writing” has different characteristics. Marks they refer to as writing tend to be smaller and darker than those they refer to as drawings which are often large and more colorful (Treiman, 2017). In this early stage, scribbles, letter-like characters and even numbers may be combined to form messages interpretable only by the author.
Later, children begin to use their knowledge of letters and letter sounds to write more conventional text. In this stage, beginning and ending consonant sounds are far more likely to be captured (rightly or wrongly) than interior sounds. Their attempts to map speech to print reveal their nascent understanding of the alphabetic principle. They often spell using letters whose names include the sound they are trying to represent. Henderson and Templeton termed this the Letter Name (LN) stage. In many cases, children’s use of the phonetic properties of letter names explains what might otherwise perplex fully literate adults. Consider, for example, the misspelling of the word yes as ues by several students in the present study. Though the children were not available for questioning, research has pointed out the logic of using the /y/ sound at the beginning of the spoken name of the letter U as the basis for such a spelling (e.g., Bear, Invernizzi, Templeton, & Johnston, 2012; Block & Duke, 2015; Henderson & Templeton, 1986).
Likewise, using H to spell the /ch/ sound at the beginning of a word like chip seems illogical if one’s theory of spelling development is grounded heavily in the idea of spelling as memorized visual representations, but makes good sense from a letter name / phonetic spelling strategy. Considering that the name of the letter h actually contains two distinct sounds – long /ā/ followed by a /ch/ sound – and considering that the latter is the first sound in the word chip, such an ‘error’ is actually quite well reasoned. Similarly, white rendered yit also makes sense.
Patricia Kuhl (2010) has described children’s linguistic perception as “nothing short of rocket science.” Indeed children’s early spelling attempts are often evidence of quite sophisticated analysis and acknowledgment of very real differences that many adult native speakers of English have unwittingly learned to ignore. Research has shown that as young children attend to sound (though not exclusively to sound) in order to spell, they are quite attentive to aspects of articulation such as voicing, relative airflow restriction, and nasalization as well as coarticulation (the effects of adjacent sounds upon each other).
When administered by knowledgeable users, developmental spelling inventories offer a glimpse into children’s advanced phonological intelligence even at this relatively ‘rudimentary’ level. Children in this early stage, for example, are very sensitive to an organizing feature of speech called voicing. There are a number of sounds in English which are identical in place and manner of articulation with the exception of the absence or presence of vibration of the vocal folds / vocal cords (Fromkin & Rodman, 1997). Such attunement prepares children to subtly alter the pronunciation of the plural morpheme commonly spelled with an s. When speaking, they closely attend to and accurately produce the last sound in dogs as /z/. As native speakers do, they hear and pronounce the last sound in cats, however, as an /s/ sound (Gleason, 1978). When attempting to spell the two words, young children are apt to use different letters at the end of the two words, demonstrating their primary focus on spelling as an act of 1:1 phonetic transcription. Similarly, children in the early stages of encoding regularly have understandable difficulty spelling words whose oral production includes no discernable vowel sound, as in the case of the words girl and hurt.
They notice that speech sounds affect and even change each other appreciably when articulated in particular sequences. Unlike the adults who worry over implied speech deficits suggested by spelling truck as chruk, they realize, intuitively, that mental and motor anticipation of the /r/ requires that native speakers use subtly different points and manner of articulation when pronouncing the first sound in truck vs tuck. They recognize without resistance the pleasing alliteration in the phrase choo-choo train. They recognize as ‘same’ the sounds in the last syllable of vacation and passion.
Their acute sensitivity to such articulation aspects also explains the significant differences in their ability to spell ‘easy’ words of similar look and length such as cap vs. can. During speech, most of the consonant and vowel sounds of English include exhalation of air either relatively gradually or abruptly through the mouth. In other cases (i.e., /n/, /m/, and /ng/), air leaves the lungs via the nasal passages, bypassing the oral cavity (Fromkin & Rodman, 1997). Native-speaker intuition prepares children and adults to unconsciously anticipate such occasions and perform accordingly. When attempting to spell words whose articulation includes nasal exhalation, children may be more attuned to this feature than most adults. In spelling a word like can, for example, they notice that the nasalization actually also affects the adjacent vowel, significantly altering its prototypical short ă quality in a way that often makes it unrecognizable to a child who has not been taught, and may never be taught, the letter that makes the nasalized /a/ sound. Similar difficulties with nasals have been frequently observed. Children able to spell words with final consonant clusters like last have great difficulty spelling final clusters that include nasals, such as land, probably because when the latter is spoken, the flap to the nasal cavity opens when the vowel sound begins and the vowel and nasal are articulated simultaneously (Ehri, 1987). Thus, though in the same position, the /n/ in land is presumably more difficult to perceive than the /s/ in last.
Ganske (2014) and others (e.g., Schreiber & Read, 1980) have observed that inclusion of the preconsonantal nasal in fact often marks the transition between the qualitatively different strategies of phonetic / letter name spelling and that of using rules and visual patterns to encode words. Scharer and Zutell (2013) described kindergarten and first grade students in this stage as
still moving left-to-right across words, not attending to chunks and not seemingly capable of thinking about the vowel in the context of rules about what sort of vowel there is and what follows the vowel. As a practical example, they are, in this stage, oblivious to the notion of a magic e rule that can render vowel sounds long.
Perhaps coincidentally, perhaps not, developmental spelling research indicates that this shift in strategies tends to occur when children are theorized, in Piagetian terms, to be transitioning from the preoperational stage to the concrete operational stage. The former is characterized as a period (ages 2-7) during which children are capable of symbolic thinking but remain egocentric and apt to struggle with the ideas of constancy and abstraction. In the concrete operational stage (ages 7-11), children begin to think logically and become capable of reasoning from specific information to a general principle, though they still struggle with abstractions.
In the Within Word stage of development, spellers gradually come to understand that some spellings are based on rules or visual patterns that do not correspond directly to phonemic sequences. Such a “new perspective on words requires a degree of cognitive maturity,” explained Henderson (1986, p. 309). A child may notice, for example, that long vowel sounds are often encoded with a word-final ~e and apply this discovery by overextending the rule. Children also begin to be capable of spelling by analogy, though with many limitations. Scharer and Zutell (2013) described, for example, a study in which first through fifth grade children were taught to use analogy as a strategy for spelling. Notably, they said, first graders in the study were largely unable to learn many of the anchor reference words even with instruction. Only among students in second grade was a significant positive effect observed, but what was striking, they emphasized, was not that second graders could be taught to use analogies to spell but “how much effort the researchers had to make to get them to that point” (p. 15), a point the children may well have reached with or without intensive instruction, given more time. Such an observation is compatible with Piaget’s idea that development precedes learning and calls into question the practice of whole-class spelling lists that do not take into account an individual child’s current abilities or capabilities.
It is not until learners reach the next stage, Syllable Juncture (often in the intermediate grades), that they come to understand how to coordinate multiple aspects of sound, meaning (e.g., tense and plurality), and visual spelling rules to accurately write presumably familiar words. Consider, for example, the layers of knowledge required to spell the word clapped accurately: the speller must represent each phoneme, recognize that the final /t/ sound is the past-tense morpheme spelled ~ed, and apply the consonant-doubling rule that preserves the short vowel of clap.
In the last stage, which Henderson labeled Derivational Relationships, learners come to realize that despite variations in sound across related words like sign and signature, spellings also serve as a means of visually preserving meaning relationships.
Other Models of Spelling Development
A separate but complementary four-phase model of reading and spelling has been proposed by Ehri (2005). In what she termed the pre-alphabetic phase, children attend to visual features like font or color to “read” environmental print like McDonald’s or Taco Bell signs; presented with the same words out of context, the same children would likely find them unrecognizable. In this phase, a child may look at a tube of Crest® in the bathroom and conclude that toothpaste is spelled CREST (Tolman, 2010). Ehri termed the next phase an early alphabetic phase and described it as characterized by partial phonemic awareness. Word recognition in that phase, she explained, is constrained by the child’s inability to successfully segment words into all the phonemes they contain. Instead, the child may rely on first or last letters alone as primary cues. The same limitations, she said, affect spelling ability: in the early alphabetic phase, a child may write letters for the dominant sounds but not all the sounds in a word. Furthermore, the letters selected to represent sounds may be tied closely to letter names rather than letter sounds. Children in this phase may use the letter Y to spell the /w/ sound, for example, because the name of the letter Y begins with the /w/ sound.
A later, more secure alphabetic phase, Ehri posited, is characterized by complete phonemic awareness as well as a growing understanding of the morphophonemic nature of English orthography, that is, the understanding that words’ spellings contain both pronunciation and grammatical cues. For example, children may notice and begin to consistently spell the regular plural and regular past tense with ~s and ~ed, respectively, despite variations in pronunciation. This later phase is also characterized by a growing sight-word vocabulary. In the last, consolidated alphabetic phase, Ehri said, readers become both accurate and fluent. In explaining her choice of the word phase to label her model, Ehri noted that the term stage may denote a strict view of development in which one type of word reading occurs at each stage and mastery is seen as a prerequisite for movement to the next stage. Neither such stage models nor her own phase model, she clarified, are actually so rigid.
In summary, research on the developmental aspects of both reading and spelling is well established. Researchers studying young spellers have consistently observed a gradual shift in the knowledge sources children call upon as they put words on paper. Henderson and Templeton described these sources as three “ordering” principles of English spelling, which seem to drive children’s spellings in a predictable sequence: beginning with alphabetic or letter-name spelling, children progress to noticing and extending within-word spelling patterns, and finally they incorporate meaning as a strategy when spelling. It is important to clarify that although researchers have recorded the age ranges and grade spans at which a majority of children can use different spelling strategies, such data are descriptive only. As all teachers are well aware, different children proceed at different rates along the same paths. In the last twenty years, a complementary body of research on teachers’ familiarity with this knowledge base has arisen.
Research on Teacher Knowledge
Direct Measures of Teachers’ Knowledge of the Linguistic Foundations of Early Literacy
For more than two decades, researchers have maintained that teachers of early literacy need high levels of understanding of the linguistic foundations of early reading and other literacy related content knowledge (Moats, 2009). Given the contribution of the alphabetic principle to successful reading and the links among phonology, orthography, and meaning, particularly in the beginning stages of literacy, teachers’ own knowledge of the alphabetic principle and of the regular and irregular mappings between language and print have frequently been the object of reading research. Such knowledge, Moats and others have emphasized, is not acquired casually and is not a natural consequence of mature reading ability.
Spear-Swerling and Cheeseman (2012) have pointed out that because of changes in education policy, specifically the widespread shift to Response to Intervention (RtI) models for dealing with struggling learners, a requirement exists for both general and special educators to have a strong grasp of such knowledge. Although there is no single widely practiced model of the RtI process in reading, it is generally defined as a three-tier model of supports that requires that classroom teachers share with specialists the responsibility for delivering high quality basic classroom instruction as well as for providing supplemental research-based interventions to students with the most serious reading difficulties. Without preparation in such foundational knowledge, teachers may unintentionally provide inadequate instruction for children, for example by unwittingly choosing inappropriate examples of words for instruction or by providing feedback that lacks insight about reasons for errors.
In the case of spelling, Rebecca Treiman (1985) has cautioned, for example, that a child who gives the “wrong” answer to a question such as “Does chair begin with t?” does not necessarily lack metalinguistic skill. On the contrary, as explained earlier, children are often aware of phonetic details that may be inaccessible to adults, but children have not yet learned which features of sounds are represented in the English spelling system.
Indirect Assessments of Teacher Knowledge
Other studies have explored teacher knowledge more indirectly, relying, for example, on document analysis (e.g., syllabi from university courses for pre-service teachers; required textbooks for the same courses) to make inferences about the extent to which pre-service teachers had been exposed to topics like phonics and phonological awareness (Hess, Rotherham & Walsh, 2004; Walsh et al., 2006). After comparing the contents of hundreds of syllabi and course texts with recommendations of the National Reading Panel’s report (2000), Walsh et al. (2006) concluded that pre-service teachers did not appear to have received preparation in key areas, and in many cases, were taught philosophies of teaching reading which stood in direct contrast to the results of scientifically based reading research. Although the methodology and therefore the conclusions reached by Walsh et al. have been controversial, other investigators, discussed below, have reached similar conclusions using other methods.
Rigden (2006) was commissioned by the National Council for Accreditation of Teacher Education (NCATE) to investigate how well, and in what ways, its expectations for teacher knowledge and skills were aligned with the research base. In her study, Rigden examined state licensure tests and their coverage of the most important insights gleaned from two decades of reading research and concluded that “it is quite possible – maybe even probable – that candidates can be licensed to teach elementary students without demonstrating their knowledge of essential components of effective reading instruction derived from research” (p. 6).
Assessment of Teacher Knowledge of Spelling
Few studies have targeted (directly or indirectly) teachers’ content knowledge as it specifically relates to spelling. One of the earliest tests of PCK related to spelling was carried out by Moats (1994), who developed a knowledge survey designed to tap the spelling- and reading-related knowledge of more than fifty teachers enrolled in a course she was teaching. She discovered what she described as “insufficiently developed concepts about language and pervasive conceptual weaknesses in the very skills that are needed for direct, language-focused reading instruction” (p. 91). As examples, she cited that just 30% could explain the y to i spelling rule or when to use ck, 20% could explain the rule for doubling m, and only 10% could correctly identify the third speech sound in thank.
In a more recent study, Moats (2009) reported the results of a newly designed knowledge survey administered to more than 100 primary-level teachers in Utah and Florida. An appendix to that study provided multiple examples supporting Moats’ conclusion that there were widespread and surprising gaps in the group’s overall understanding of oral and written language concepts. For example, only 52% of test takers could select the correct answer (option c, italicized in the original) to the following question:
The /k/ sounds in lake and lack are spelled differently. Why is lack spelled with ck?
a. The /k/ sound ends the word.
b. The word is a verb.
c. ck is used immediately after a short vowel.
d. c and k produce the same sound.
e. There is no principle or rule to explain this.
In another study, Carreker, Joshi, and Boulware-Gooden (2010) considered potential relationships between teacher knowledge and ability to analyze students’ spelling errors and select appropriate instructional activities based on such analysis. Knowledge was measured by a 30-item test that asked participants to count syllables, phonemes, and morphemes. The spelling instruction assessment (SIA) was developed by the first author to assess participants’ ability to use spelling errors to identify a student’s underlying difficulty and plan appropriate instruction. The measure consisted of 12 items which assessed whether participants could choose from among alternatives the one instructional activity that directly targeted a demonstrated spelling problem. Phoneme, syllable, and morpheme counting were used as independent variables to predict the outcomes on the SIA. They found that participants who demonstrated the greatest knowledge of phonemes, syllables, and morphemes were better able to identify the most appropriate activities. An acknowledged limitation of the study, however, was that reliability for the SIA measure was only moderate (Cronbach’s alpha = .64).
McNeil and Kirk (2014) asked teachers in New Zealand to evaluate their preparedness to teach spelling effectively and reported that the majority of teachers who participated in the survey (69%) felt that they had not received adequate preparation to teach spelling as part of their teacher training program. The authors suggested that such results may imply a lack of focus on teaching skills that underlie spelling success in initial teacher education.
Alatalo (2015) also examined the spelling-related content knowledge of over 250 literacy teachers in grades 1-3 in Sweden. That study additionally aimed to describe teacher knowledge of code concepts and language structures. The teacher knowledge survey (TKS) used was based on Moats’ 1994 survey but adapted to account for differences in Swedish language structure and orthography. The survey included 43 items in total, 11 of them related to spelling rules and conventions. As in Moats’ study, participants were asked, for example, to explain the rule (in Swedish) for doubling consonants. Alatalo described the aggregate results as generally low: half the teachers received just partial credit, and another 18% either responded incorrectly or wrote “I don’t know.” Just 32% received full points.
In conclusion, researchers have found wide variation among teachers in foundational knowledge deemed essential for early literacy instruction by many in the field. Moats (2000) suggested that teachers who are well versed in foundations are more apt to understand why students make the errors that they make. Such knowledge may enable teachers to judge not only what a particular student knows but also what he or she needs to know next about the relationship between speech and the printed word. As Carreker, Joshi, and Boulware-Gooden, (2010) argued in their study of spelling related teacher knowledge, such domain specific content knowledge is important for teachers of all students, but especially for teachers of students with dyslexia or other language-based learning disabilities.
In summary, there is consensus in the extant literature about the importance of reasonably accurate teacher judgment as well as about the importance of content knowledge (CK) and pedagogical content knowledge (PCK) for effective teaching. A logical next step is to investigate an interactive relationship among these variables. The proposed study attempts to explore such a relationship in the specific context of teacher knowledge and judgment of spelling.