A Study Of Language Acquisition In Children English Language Essay

The study of language acquisition explores children's language in terms of the structural characteristics of language production as well as differences in the acquisition process. The key paradigms examined are the nativist and connectionist paradigms, including aspects such as modularity, innateness and domain specificity. Research has been drawn from the relevant fields to enable a balanced evaluation of language acquisition, and from these studies conclusions and criticisms are drawn, supported by the corresponding paradigms. The conclusion is that child language acquisition must be at least in part due to innate properties, while nurture also plays its part. To what extent each influences language acquisition is surely the focus of future research.

Keywords: Language acquisition, Universal grammar, Nativism, Connectionism.

A critical evaluation of child language acquisition, and of whether any part of it could be due to an innate property, requires an evaluation of the competing theories of language acquisition. These include both the nativist and connectionist paradigms, and raise the issues of modularity, innateness, and domain specificity. Criticisms of and support for the theories discussed are gathered from neurobiological evidence and from language acquisition research. On the basis of this research, the conclusion is that language acquisition in children is in part innate but also open to a nurturing element, which raises the further question of to what extent nature or nurture influences child language acquisition.

Nativism is situated in grammatical, especially syntactic acquisition; however, innate mechanisms and knowledge have also been proposed for what in the Chomskyan tradition was considered to be the only component of language that needed learning, as opposed to acquiring, namely the lexicon. Since Noam Chomsky, the notion of 'nativism' has been very prominent in the study of language acquisition. The gist of the argument that Chomsky (1965) put forth was that at least some aspects of language are innate: a child is born with a Language Acquisition Device which permits her to acquire the grammar of the ambient language. In later formulations (Chomsky 1986), the child is said to have an innate idea of Universal Grammar (UG), i.e., the universal principles that determine the form of any human language, and the parameters that determine the highly restricted variation between languages. The main arguments leading to this proposal relate to the issue of the 'learnability' of language. The central and traditional 'poverty of the stimulus' argument stipulates that there is such an enormous discrepancy between the highly abstract grammatical knowledge that the child has to acquire and the underspecified nature of the phonetic strings that the child hears, that there must be an innately guided discovery procedure. The child's search space must be restricted in some way, otherwise it is inconceivable that a child could discover the grammar of her language in such a short period of time. There are various answers to the question of how this restriction works. One possible explanation is to assume that acquisition requires maturation: not all components of Universal Grammar are available to the child from the start (Radford 1988, 1990), and neurological maturation is required for them to become available.
Another explanation stipulates that all required knowledge is present and available from the start of the learning process at birth, but interdependencies between grammatical parameters make it a protracted process: parameters are ordered and the setting of parameters follows a certain ordered path, in the sense that the fixing of one parameter is dependent upon the prior fixing of another parameter.

Modularity refers to the compartmentalization of knowledge in the mind. A module is a specialized, differentiated and encapsulated mental organ which, according to Fodor (1983), has evolved to take care of specific knowledge that is of crucial importance for the species. Language is one of these hardwired cognitive systems that is crucial for the species and has therefore evolved into a separate module. In essence this means that language is independent from other cognitive systems, and within the language module various sub-modules (syntax, phonology) also operate independently. Fodor (1983) pinpoints various criteria for distinguishing hardwired, independent modules from learned behaviors: modules process information in a characteristic way, fast, mandatory and informationally encapsulated. However, this type of information processing is also characteristic of learned behaviors that have become largely automatic once behavioral mastery is reached. But modules also have a specific biological status: modules are built up in a characteristic sequence and break down in a characteristic way, and they are localized in the brain. Hence the language module is a kind of 'mental organ'.

The issue of modularity is closely connected to the domain-specificity of linguistic processing. For language acquisition, this question is as follows: Is language development a function of domain-specific or domain-general processes and representations? In other words, does a child use the same processes and/or knowledge structures for acquiring language as well as for learning other cognitive skills? Or does language acquisition require a dedicated module, a 'mental organ'? This question is high on the research agenda of the community, and input from the neurolinguistic front is currently throwing important light on the issue.

Liz Bates (1994, see also Bates 1999; Bates et al. 1992) argues that in fact three issues are confounded in the debate: innateness, localization, and domain specificity. As to innateness, the claim is that something about language (acquisition) is innate, a claim which has to be true, since we are the only species to acquire language in its fullest sense (symbolic lexicon and syntax, Deacon 1997). As to localization, the claim is that there are specific areas in the brain that are dedicated to language processing. This claim also appears to be uncontroversial, judging from the neurolinguistic literature, though there is accumulating evidence of brain plasticity and reorganization when the default conditions do not hold. Bates (1994: 136) argues that the real debate revolves around the mental organ claim: Are the mental structures that support language 'modular', discontinuous and dissociable from all other perceptual and cognitive systems? Does the brain of a newborn child contain neural structures that are destined to mediate language, and language alone?

Proponents of universal built-in constraints in lexical acquisition have made specific assumptions about linguistic learning mechanisms that are supposed to help children cope with the inductive problem involved in learning novel nouns. According to this view, children have innate lexical biases such as the whole object constraint, the taxonomic bias and the shape bias (Golinkoff et al. 1994; Markman 1994; Woodward & Markman 1998). Together these constraints predict that a child encountering a new noun will assume that its label refers to the whole object rather than to its parts or to properties associated with it; that there are other whole objects sharing the same category with it; and that the shape rather than the size or texture of a count noun will determine what other nouns will be regarded as sharing the same category. An alternative account of such mechanisms is proposed by Bloom (1994), who argues that syntactic distinctions of mass vs. discrete reference of nouns correspond to aspects of abstract cognition, and that young children are able to exploit such innate syntax/semantic mappings in order to learn new words, and specifically, types of new nouns.
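The joint effect of these constraints can be illustrated with a small sketch. The feature representation and the items below are invented for illustration and are not drawn from the cited studies: given a novel label applied to an exemplar object, the whole-object and shape biases restrict its extension to whole objects of the same shape, regardless of size or texture.

```python
# Hypothetical exemplar: the child hears 'dax' applied to this whole object.
exemplar = {"name": "dax", "shape": "round", "texture": "fuzzy", "size": "small"}

# Candidate referents, each described by the same invented features.
candidates = [
    {"name": "big smooth ball",  "shape": "round",  "texture": "smooth", "size": "big"},
    {"name": "small fuzzy cube", "shape": "square", "texture": "fuzzy",  "size": "small"},
    {"name": "round pebble",     "shape": "round",  "texture": "rough",  "size": "small"},
]

def extend_label(exemplar, candidates):
    """Shape bias: extend the new count noun to whole objects sharing the
    exemplar's shape, ignoring matches in size or texture."""
    return [c["name"] for c in candidates if c["shape"] == exemplar["shape"]]

print(extend_label(exemplar, candidates))
```

Note that the fuzzy cube, which matches in texture and size but not shape, is excluded, which is the prediction the shape-bias studies test.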

Recent work on the early acquisition of noun reference denies the existence of innate lexical biases to explain how children handle the almost infinite number of interpretations logically possible for every novel noun (Landauer & Dumais 1997; Smith 1998; Tomasello et al. 1996). These studies suggest that initial lexical learning is guided by cognitive knowledge, parental guidance, world knowledge, and by attending to language-specific properties of words provided in the input. In a series of crosslinguistic studies, Gathercole & Min (1997), Gathercole, Thomas & Evans (2000) and Gathercole et al. (1999) also propose that children's lexical biases are a symptom of their reliance on regularities they discover about their own particular language in interaction with linguistic and cognitive factors.

Instrumental in this interest in data-driven approaches to language acquisition was the rise of connectionism, marked by the highly influential work of McClelland & Rumelhart (1986). It culminated in a volume written by Elman, Bates, Johnson, Karmiloff-Smith, Parisi and Plunkett in 1996 entitled Rethinking innateness, in which the ideas about innate linguistic knowledge and processing were questioned, critically analyzed and put into an empiricist perspective: not a simple denial of nativism, but a rejection of a nativist learning theory without a learning subject, as it figures in a Universal Grammar approach.

Investigations of children's speech perception in the first year of life indicate that they become attuned to the regularities in the language that they hear from very early on (Jusczyk 1997; Werker & Tees 1999; Kuhl et al. 1992). One of the most convincing findings in this respect brings grammar learning into the picture. Saffran, Aslin & Newport (1996) exposed eight-month-olds to 'words'. The stimuli were presented while the children were playing with toys on the floor. The words were pronounced by a monotonous synthesized female voice at a rate of 270 syllables per minute. After they had heard the 'words' for two minutes, the children were tested in the following way: either the same stimuli were presented to them, or they heard exactly the same syllables but in different orders, thus breaking up the statistical structure (defined by conditional probabilities) of the original 'words'. The result was that the eight-month-olds were able to detect the regularities in the input: they reliably discriminated the 'words' that obeyed the statistical regularities of the 'words' they had heard from the 'words' they had not heard before. Hence, even after two minutes of exposure, babies are able to induce the statistical regularities in the input without reinforcement and without paying particular attention to the input. These findings have been replicated: young children were shown to be able to induce 'implicitly' the finite-state grammar underlying sequences of events (such as the syllables making up words in the Saffran et al. experiment), and they were shown to be able to do that with sequences of tones (Saffran et al. 1999) and visual displays (Kirkham et al. 2002).
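The segmentation mechanism at issue can be sketched computationally. The snippet below is a toy illustration with made-up trisyllabic items, not the actual stimuli or procedure of Saffran et al.: it estimates transitional (conditional) probabilities between adjacent syllables from a continuous stream and posits word boundaries wherever the probability dips, which is the cue the infants are argued to exploit.

```python
from collections import Counter

def transitional_probabilities(stream):
    """Estimate P(next syllable | current syllable) from a syllable stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def segment(stream, tps, threshold=0.8):
    """Posit a word boundary wherever the transitional probability dips below threshold."""
    words, current = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if tps.get((a, b), 0.0) < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Three made-up trisyllabic 'words', concatenated in a fixed pattern so that
# each word is followed by more than one other word (keeping boundary TPs low).
syllables = {"tupiro": ["tu", "pi", "ro"],
             "golabu": ["go", "la", "bu"],
             "bidaku": ["bi", "da", "ku"]}
pattern = ["tupiro", "golabu", "bidaku", "tupiro", "bidaku", "golabu"] * 5
stream = [s for word in pattern for s in syllables[word]]

tps = transitional_probabilities(stream)
print(segment(stream, tps)[:4])
```

Within 'words' the transitional probability is 1.0 (e.g., 'tu' is always followed by 'pi'), while across word boundaries it drops to roughly 0.5, so the dips alone recover the original word inventory.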

Creativity or generativity appears to be a key issue for empiricist approaches to acquisition (or to language processing in general). If children rely on their memory of input patterns, how is generalization possible at all? Indeed, the standard approach emphasizes the observation that the grammar, though finite, can be used to generate an infinite set of sentences, and this capacity to generalize has provided the classical evidence that knowledge of a language involves rules (see Berko-Gleason 1958; Pinker 1999). The controversy over this issue has not yet been resolved: it has produced an enormous amount of research investigating, for instance, the use of rules in a symbolic dual-route model of morphology versus the exclusive use of associative memory in a single-route model (see e.g., Plunkett 1995 for a selective review, and a theme issue of Cognition on 'Rules and Similarity in Human Thinking', vol. 65 (2/3)). A crucial notion that has gained credibility in this respect is 'analogy': connectionists have brought analogical learning under the spotlight; several operationalizations have consequently been proposed and applied to language acquisition and processing (Broeder & Murre 2000).
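The dual-route position can be caricatured in a few lines: irregular past tenses are retrieved from associative memory (a plain dictionary stands in for the network here), while regulars, including nonce verbs like those of the wug test, are generated by a default symbolic rule. This is a toy sketch of the contrast, not a model of either account.

```python
# Stored irregular forms: the 'memory route' (a dict as a stand-in for
# associative memory in the dual-route picture).
IRREGULARS = {"go": "went", "sing": "sang", "bring": "brought", "eat": "ate"}

def past_tense(verb):
    """Dual-route sketch: memory lookup first, default -ed rule otherwise."""
    if verb in IRREGULARS:      # memory route wins when a stored form exists
        return IRREGULARS[verb]
    if verb.endswith("e"):      # minimal spelling tweak: 'bake' -> 'baked'
        return verb + "d"
    return verb + "ed"          # the rule applies to any novel verb

print(past_tense("rick"))  # a nonce verb, as in Berko's test, gets the rule
```

The single-route alternative would instead produce both regulars and irregulars from one associative mechanism generalizing by similarity, with no separate symbolic rule.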

The notion of bootstrapping in acquisition research has been used to describe how children use correlations between different aspects of language to infer structure. Connectionist approaches provide a generalization and formalization of this notion, which is seen to play a key role in the child's entry into language, providing the basis for identifying words, their meanings and grammatical functions, as well as the kinds of structures they participate in (Seidenberg 1997). Bootstrapping is a mechanism proposed to deal with the problem of how the child 'breaks' into a particular linguistic system. Assuming the child has a notion of 'objects in the world', she may use that information as an entry into the domain of parts-of-speech or lexical categories: for example, words referring to objects are of a particular kind termed 'nouns'. Thus, on the basis of already existing knowledge and processing capacities, the child uses the information in the linguistic and non-linguistic input to determine the language-particular regularities that constitute the grammar and the lexicon of her native language (Weissenborn & Höhle 2001). As the example of 'objects in the world' and 'nouns in the language' already implies, one of the main problems with bootstrapping is that most of the time there is no completely transparent interface between the domains at hand: on the one hand, nouns do not always refer to objects alone, while on the other hand, not only objects are referred to by nouns. But no matter how imperfect the parallelism between two knowledge domains is, bootstrapping is considered to be a useful initial aid for the language-learning child.

Pinker (1984, and 1989 for a further elaboration) proposes 'semantic bootstrapping' as a mechanism children use for breaking into the syntactic structure of the language. The idea is quite simple: the child hears adults talk, and because she understands the scene they are talking about, she can start figuring out what the language structure is like. In other words, cognitive capacities are invoked as a bootstrap into syntactic structure. For instance: if the child witnesses a scene in which an actor is performing a particular action ('John is running', 'Mary is cleaning', 'The baby is crying', …), she may notice that the actor is the first one to be mentioned, and that the action follows. And witnessing scenes in which utterances occur such as 'John gives a book to Mary', 'The baby throws a bottle on the floor', etc., the child may notice that the thing something happens with is mentioned after the actor and the action. In so doing, the child may eventually hit upon the generalization that the relationships she understands are expressed by the order of constituents, and that the order is SVO in the language she hears. Thus, Semantic Bootstrapping claims that "the child uses the presence of semantic entities such as 'thing', 'causal agent', 'true in past,' and 'predicate-argument relation' to infer that the input contains tokens of the corresponding syntactic substantive universals such as 'noun', 'subject', 'auxiliary', 'dominates', and so on" (Pinker 1987: 407). Pinker invokes an elaborate set of innate concepts and devices in order to make the bootstrapping approach work; he argues that the child is innately equipped with a large number of the components of grammar: syntactic categories like 'noun' and 'verb' are innate, and furthermore Pinker assumes that there are innately given 'linking rules' that link those syntactic categories to thematic categories such as agent, theme, etc.
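A toy simulation can show how paired scenes and utterances suffice to infer a dominant SVO order. The scene representation and the crude string-matching heuristic below are my own illustrative assumptions, not Pinker's formalism: each observation pairs a scene the child understands (agent, action, patient) with the utterance heard, and the learner simply tallies the order in which the role fillers appear.

```python
from collections import Counter

# Hypothetical paired input: a scene the child understands plus the utterance heard.
observations = [
    ({"agent": "John", "action": "kick", "patient": "ball"},
     ["John", "kicks", "the", "ball"]),
    ({"agent": "Mary", "action": "read", "patient": "book"},
     ["Mary", "reads", "the", "book"]),
    ({"agent": "baby", "action": "throw", "patient": "bottle"},
     ["the", "baby", "throws", "the", "bottle"]),
]

def infer_order(observations):
    """Tally the order of semantic roles across utterances and return the winner."""
    orders = Counter()
    for scene, words in observations:
        positions = {}
        for role, filler in scene.items():
            # crude stand-in for word-to-referent matching: shared stem prefix
            for i, w in enumerate(words):
                if w.lower().startswith(filler.lower()[:4]):
                    positions[role] = i
                    break
        if len(positions) == 3:
            ranked = sorted(positions, key=positions.get)
            label = {"agent": "S", "action": "V", "patient": "O"}
            orders["".join(label[r] for r in ranked)] += 1
    return orders.most_common(1)[0][0]

print(infer_order(observations))
```

The point of the sketch is Pinker's: the ordering generalization is only reachable because the semantic roles are given in advance; the innate role inventory and linking rules do the heavy lifting.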

Moreover, there is a developmental question that has remained untouched, namely whether and how the bootstrapping strategies and their interrelations change over time. It may well be that the bootstraps are useful for initially 'cracking the linguistic code' and become less useful later on. Such foregrounding of specific types of bootstraps is to be expected, given the expanding linguistic and nonlinguistic knowledge base of the child. Moreover, changes in the bootstrapping capacities of the child may also be the result of changes in her information processing capacities, such as changes in memory and attentional resources (Weissenborn & Höhle 2001: viii), as indicated by the frequency effects for phonotactic patterns noticed in prelinguistic infants (Jusczyk et al. 1994).