The first step in any attempt to co-ordinate and rationalise scientific terminology would seem to be to come to some agreement about the general signification of each term we use, and then, after unhurried discussion, to fix that signification by means of precise definitions in order to safeguard it against shift and change.
In order to see the significance of a technical term and the relationship between word and thing, we must step outside technical nomenclature for a moment. We know very well that the three sounds we use in speech to refer to a gull, that long-winged, web-footed, pearl-grey bird, form an arbitrary or conventional symbol. The bird itself, the living creature that we see with our eyes, we may call the referent, and the picture of it that we have in our mind as we speak, whether a memory picture or one actually seen at the moment, may be called the image.
The writer brings out a simple triangle diagram to represent these three entities; symbol and referent are connected with dotted lines to remind us that there is no direct connection between symbol and referent. The connection between word and thing is always arbitrary and conventional. It corresponds to nothing in the external world. Our mental picture (or image) of the living bird (or referent) is symbolised by the word [ɡʌl]. That is all.
The writer's triangle diagram (image at the apex, symbol and referent at the two corners of the base) is very useful to keep in mind, but we should not forget that it has only a limited application, for three reasons. First, psychologists have discovered that not all people think in or through images. They do not always see with the mind's eye; they may often think through movements, gestures, ritual acts, and in many other ways. It is certainly possible for highly intellectual persons to think only in words, and for them this symbol-image-referent diagram is irrelevant. Mental images, it would seem, accompany only the simplest processes of cognition.
Secondly, such a straightforward symbol-image-referent relationship will normally apply only to concrete substantives. Abstractions, attributive adjectives and verbs will have notion-spheres or spheres of reference rather than individual referents. Thirdly, there are many words which may be said, quite crudely, to have no meaning in themselves. Such are operators and interjections. Operators are all the little forms like articles, prepositions, conjunctions and conjunctive adverbs (the, on, than and if, for instance) which perform syntactic functions, linking together parts of sentences or showing the relationships between substantives or between substantives and verbs within sentences.
This term operator is borrowed from mathematics. Operators are sometimes called structure-words by writers. Exclamations like Aha!, Alas! or Hurrah! are meaningless in themselves. They are merely emotive or evocative noises expressing subjective feelings of surprise, triumph, mockery, or irony; of grief, pity, or disillusionment; and of approbation, exhilaration, or exultation, respectively.
Even though the writer's triangle diagram has limited application, we should never lose sight of it in our endeavours to understand the spectrum-like mutations of the form-meaning parallelism which present themselves at every level of language study, since, try as we will, we may experience endless difficulty in dissociating word from thing. Like a small boy who asserted indignantly that 'a pig is called a pig just because it is such a dirty animal', or like the superstitious person who cherishes belief in the magic of spells, we are all too prone to assume some necessary connection and to tie up the word with the thing either by long use or from sheer thoughtlessness.
What is in a name? The red flower with thorns on its twigs is called a rose; were we to call it by any other name, it would smell as sweet. Its smell is both sweet and varied, for no other flower has responded more readily to the skill of the hybridist and cultivator. If we love roses, and if we can name accurately all the lovely varieties that grow in our gardens, we may well pride ourselves on knowing all about them. In fact, of course, we may know very little about their origins, their habitats, and their ways of growth.
Symbol, image and referent may be regarded as three distinct units, but they should not be taken as trilateral entities all on the same level. The gull's cry is the stimulus, and our utterance, if we say something to ourselves or to others about the bird, is the reaction or response. That reaction is one co-ordinated movement of mind, and that, in the last analysis, is what understanding is. Comprehension is a kind of complex awareness. If we use the writer's triangle diagram with this notion of complex awareness constantly in mind, we may find it very helpful in our study of semantic principles.
Till now we have observed meaning only incidentally, for the simple reason that we have been concerned with classification. In their well-known discussion of linguistic analysis, the linguists Bloch and Trager made their own attitude towards the exclusion of meaning quite clear. Although it is quite important to distinguish between grammatical and lexical meaning, and necessary in a systematic description of a language to define at least the grammatical meanings as carefully as possible, all our classifications must be based exclusively on form: on differences and similarities in the phonemic structure of bases and affixes, or on the occurrence of words in particular types of phrases and sentences. In making our classifications there must be no appeal to meaning, to abstract logic, or to philosophy.
Bloch expressed this same principle of 'excluded meaning' in more forceful terms: the basic assumptions that underlie phonemics, we believe, can be stated without any mention of mind and meaning; but meaning, at least, is so obviously useful as a short cut in the investigation of phonemic structure (one might almost say, so inescapable) that any linguist who refused to employ it would be very largely wasting his time. The parts of speech, for example, in any new language under examination, should be determined either by their inflexions or, if completely uninflected, by their syntactic functions, and not by the real or fancied significations which they are deemed to express or by some preconceived framework of universal grammar. The only useful generalisations that we may make about any language are inductive ones, based upon close observation. Features which we assume to be universal may be lacking in the very next language that we encounter.
It is important to note that this principle of excluded meaning applies only to linguistic classification and analysis. It does not apply to language itself. Every segment of every effective utterance has some degree of meaning at all levels, whether phonological, morphological, or syntactic.
Now let us deal with semantic value, lexical meaning, and contextual sense. A phoneme has semantic value inasmuch as it is a linguistic feature differentiating form from form. The initial voiced plosive [ɡ] in [ɡʌl] has no meaning in itself, but because it marks off gull from dull, bull, lull and mull, it possesses semantic value. In other words, a phoneme is a distinctive feature, since it distinguishes symbols (and therefore referents) as the same or not the same. So, to give another very simple example, the final post-dental plosive [t] in [lukt] has no meaning in itself, but because it differentiates the past tense form (I) looked from the present (I) look, it possesses semantic value; a different value, it may be observed, from the initial [ɡ] in [ɡʌl], because it is not only a phoneme but also a bound morpheme, indicating the inflexion of the past tense. We thus pass from phonemics to morphophonemics. Further, as we proceed from morphophonemics and morphology proper to syntax, we observe a rise in the scale of semantic values. Affixes have greater values than inflexions, and free morphemes have greater values still. We see this clearly as we pass from book-s to book-ish. Nevertheless, because the suffix -ish is a bound morpheme, we say that it has only semantic value in our scheme of things, and not full lexical meaning. As we cross the boundary separating bound forms like the -ish in bookish from free forms like the case in bookcase, we pass from restricted semantic value to independent lexical meaning.
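The idea that a phoneme has semantic value because it differentiates form from form can be sketched computationally as a search for minimal pairs. This is an illustrative sketch only: the "transcriptions" below are plain strings standing in for phonemic forms, and the word list and the helper name minimal_pairs are invented for the example, not drawn from any real phonemic lexicon.

```python
def minimal_pairs(word, lexicon):
    """Return words in the lexicon that differ from `word` by exactly one phoneme."""
    pairs = []
    for other in lexicon:
        if other != word and len(other) == len(word):
            # Count positions where the two forms differ.
            diffs = sum(1 for a, b in zip(word, other) if a != b)
            if diffs == 1:
                pairs.append(other)
    return pairs

# Toy phonemic transcriptions written as simple strings.
lexicon = ["gul", "dul", "bul", "lul", "mul", "pig"]

# The initial phoneme marks gull off from dull, bull, lull and mull,
# and so possesses semantic value even though it means nothing by itself.
print(minimal_pairs("gul", lexicon))  # ['dul', 'bul', 'lul', 'mul']
```

The point of the sketch is only that the phoneme's "value" lies entirely in the contrasts it creates, not in any meaning of its own.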
The important parallelism between the formal and the semantic aspects of language has an interesting bearing on that laudable proposal, made from time to time by exuberant reformers, that dictionaries of morphemes should be compiled for all the great languages of the world. Such dictionaries, they claim, would be of immense value and help to comparatists, etymologists and linguistic analysts. The dictionary or lexical meaning of a word is its signification detached from any particular context and, because detached from the living flow of speech, therefore dead, if not embalmed. In this respect a dictionary may be likened to a cemetery or abode of the dead. There in decorous order lie the bodies of words with their several epitaphs, waiting to be raised to life again on the tongues of men.
As for lexical meanings, the simplest words are manifestly those which symbolize single things or concepts, like proper names. These may be described as monosemantemic, and therefore unambiguous. Proper names bear the simplest type of symbolic meanings. At the same time, it is worth observing that the name of a living person, say Tenzin, is both a symbol by which we think of him and also a sign by which we signal to him. A sign is any mark or gesture conveying information to the beholder: for example, a ring seen round the moon is a sign of rain. A symbol is a special kind of sign intentionally selected or designed to stand for something else: for example, a ring worn round the finger is a symbol of betrothal.
One linguistic theory investigates word meaning through context. On this view, the meaning of a word is fully reflected by its context; the meaning of a word is constituted by its contextual relations. A distinction is therefore made between degrees of participation as well as modes of participation. To accomplish this distinction, any part of a sentence that bears a meaning and combines with the meanings of other constituents is labelled a semantic constituent. A semantic constituent that cannot be broken down into more elementary constituents is labelled a minimal semantic constituent.
The logical and conceptual theory is an effort to explain properties of argument structure. The assumption behind this theory is that syntactic properties of phrases reflect the meanings of the words that head them. With this theory, linguists can better deal with the fact that subtle differences in word meaning correlate with other differences in the syntactic structure that the word appears in. This is done by examining the internal structure of words. The small parts that make up the internal structure of words are referred to as semantic primitives.
A perennial problem in semantics is the delineation of its subject matter. The term meaning can be used in a variety of ways, and only some of these correspond to the usual understanding of the scope of linguistic or computational semantics. We shall take the scope of semantics to be restricted to the literal interpretations of sentences in a context, ignoring phenomena like irony, metaphor, or conversational implicature.
A standard assumption in computationally oriented semantics is that knowledge of the meaning of a sentence can be equated with knowledge of its truth conditions: that is, knowledge of what the world would be like if the sentence were true. This is not the same as knowing whether a sentence is true, which is (usually) an empirical matter, but knowledge of truth conditions is a prerequisite for such verification to be possible. Meaning as truth conditions needs to be generalized somewhat for the case of imperatives or questions, but is a common ground among all contemporary theories, in one form or another, and has an extensive philosophical justification.
A semantic description of a language is some finitely stated mechanism that allows us to say, for each sentence of the language, what its truth conditions are. Just as for grammatical description, a semantic theory will characterize complex and novel sentences on the basis of their constituents: their meanings, and the manner in which they are put together. The basic constituents will ultimately be the meanings of words and morphemes. The modes of combination of constituents are largely determined by the syntactic structure of the language. In general, to each syntactic rule combining some sequence of child constituents into a parent constituent, there will correspond some semantic operation combining the meanings of the children to produce the meaning of the parent.
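The rule-to-rule correspondence described above, a semantic operation paired with each syntactic rule, can be sketched in miniature. Everything here is an invented illustration under simplifying assumptions: a toy lexicon in which names denote individuals and verbs denote sets, a tree encoding as nested tuples, and a single rule combining a subject with a predicate.

```python
# Word meanings: names denote individuals, intransitive verbs denote the
# set of individuals of which they hold. All entries are invented examples.
lexicon = {
    "Tenzin": "tenzin",
    "sings": {"tenzin"},           # the set of things that sing
    "fly": {"g1", "g2"},           # the set of things that fly
}

def interpret(tree):
    """Interpret a syntax tree; ("S", NP, VP) is true iff NP's denotation is in VP's."""
    if isinstance(tree, str):
        return lexicon[tree]       # the meaning of a word is looked up
    rule, *children = tree
    if rule == "S":                # syntactic rule S -> NP VP ...
        np, vp = (interpret(c) for c in children)
        return np in vp            # ... paired with its semantic operation
    raise ValueError(f"no semantic rule for {rule}")

print(interpret(("S", "Tenzin", "sings")))  # True
print(interpret(("S", "Tenzin", "fly")))    # False
```

The meaning of the parent constituent is computed only from the meanings of its children and the rule that combined them, which is the compositional principle the paragraph states.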
A corollary of knowledge of the truth conditions of a sentence is knowledge of what inferences can be legitimately drawn from it. Valid inference is traditionally within the province of logic (as is truth), and mathematical logic has provided the basic tools for the development of semantic theories. One particular logical system, first order predicate calculus (FOPC), has played a special role in semantics (as it has in many areas of computer science and artificial intelligence). FOPC can be seen as a small model of how to develop a rigorous semantic treatment for a language, in this case an artificial one developed for the unambiguous expression of some aspects of mathematics. The set of well-formed formulae of FOPC is specified by a grammar, and a rule of semantic interpretation is associated with each syntactic construct permitted by this grammar. The interpretations of constituents are given by associating them with set-theoretic constructions (their denotation) from a set of basic elements in some universe of discourse. Thus for any of the infinitely large set of FOPC sentences we can give a precise description of its truth conditions, with respect to that universe of discourse. Furthermore, we can give a precise account of the set of valid inferences to be drawn from some sentence or set of sentences, given these truth conditions, or (equivalently, in the case of FOPC) given a set of rules of inference for the logic.
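A toy version of FOPC interpretation against a universe of discourse might look as follows. The formula encoding (nested tuples), the universe, and the predicate denotations are all invented for illustration; a real system would also handle conjunction, negation, existentials and functions.

```python
# A tiny universe of discourse and set-theoretic denotations for predicates.
universe = {"g1", "g2", "d1"}
denotation = {
    "Gull": {"g1", "g2"},
    "Flies": {"g1", "g2", "d1"},
}

def evaluate(formula, assignment=None):
    """Compute the truth value of a formula relative to the universe above."""
    assignment = assignment or {}
    op = formula[0]
    if op == "pred":                       # e.g. ("pred", "Gull", "x")
        _, name, var = formula
        return assignment[var] in denotation[name]
    if op == "implies":                    # material implication
        return (not evaluate(formula[1], assignment)) or evaluate(formula[2], assignment)
    if op == "forall":                     # ("forall", "x", body)
        _, var, body = formula
        return all(evaluate(body, {**assignment, var: e}) for e in universe)
    raise ValueError(f"unknown operator {op}")

# "Every gull flies": forall x. Gull(x) -> Flies(x)
every_gull_flies = ("forall", "x",
                    ("implies", ("pred", "Gull", "x"), ("pred", "Flies", "x")))
print(evaluate(every_gull_flies))  # True in this universe
```

The recursion mirrors the paragraph's point: one interpretation rule per syntactic construct yields precise truth conditions for every formula the grammar admits.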
Semantic interpretation is an important component in dialog systems. It is related to natural language understanding, but mostly it refers to the last stage of understanding. The goal of interpretation is to bind the user utterance to a concept, or something the system can understand; typically this means creating a database query based on the user utterance.

Semantic properties or meaning properties are those aspects of a linguistic unit, such as a morpheme, word, or sentence, that contribute to the meaning of that unit. Basic semantic properties include being meaningful or meaningless (for example, whether a given word is part of a language's lexicon with a generally understood meaning); polysemy, having multiple, typically related, meanings; ambiguity, having meanings which aren't necessarily related; and anomaly, where the elements of a unit are semantically incompatible with each other, although possibly grammatically sound. Beyond the expression itself, there are higher-level semantic relations that describe the relationship between units: these include synonymy, antonymy, and hyponymy.
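The dialog-system interpretation step mentioned above, binding an utterance to something the system can understand, can be sketched as a toy frame filler. The intent patterns, slot names, and the flight-booking domain are all invented assumptions for the example, not any real system's API.

```python
import re

def interpret_utterance(utterance):
    """Map a flight-booking utterance to a query frame (a toy illustration)."""
    frame = {"intent": None, "destination": None}
    # Crude intent detection from trigger words (an invented pattern set).
    if re.search(r"\b(fly|flight|book)\b", utterance, re.IGNORECASE):
        frame["intent"] = "book_flight"
    # Fill the destination slot from a capitalised word after "to".
    m = re.search(r"\bto ([A-Z][a-z]+)", utterance)
    if m:
        frame["destination"] = m.group(1)
    return frame

frame = interpret_utterance("I want to book a flight to Boston")
print(frame)  # {'intent': 'book_flight', 'destination': 'Boston'}
```

A production system would replace the regular expressions with a trained parser or classifier, but the output has the same shape: a structured query the backend can execute.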
Besides basic properties of semantics, the term semantic property is also sometimes used to describe the semantic components of a word, such as man assuming that the referent is human, male, and adult, or female being a common component of girl, woman, and actress. In this sense, semantic properties are used to define the semantic field of a word or set of words. In psychology, semantic memory is memory for meaning, in other words the aspect of memory that preserves only the gist, the general significance, of remembered experience, while episodic memory is memory for the ephemeral details, the individual features, or the unique particulars of experience. Word meanings are measured by the company they keep, i.e. the relationships among words themselves in a semantic network. Memories may be transferred intergenerationally or isolated in a single generation due to a cultural disruption. Different generations may have different experiences at similar points in their own time-lines. This may then create a vertically heterogeneous semantic net for certain words in an otherwise homogeneous culture. In a network created by people analysing their understanding of the word (such as WordNet) the links and decomposition structures of the network are few in number and kind, and include "part of", "kind of", and similar links. In automated ontologies the links are computed vectors without explicit meaning. Various automated technologies are being developed to compute the meaning of words: latent semantic indexing and support vector machines as well as natural language processing, neural networks and predicate calculus techniques.
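The two ideas above, semantic components like [human, male, adult] and the "kind of" links of a WordNet-style network, can be sketched together. The feature sets and links below are invented for illustration and are far smaller and cruder than any real lexical resource.

```python
# Semantic components of a few words (invented, illustrative feature sets).
components = {
    "man":     {"human", "male", "adult"},
    "woman":   {"human", "female", "adult"},
    "girl":    {"human", "female"},
    "actress": {"human", "female", "adult"},
}

def semantic_field(feature):
    """Words sharing a component belong to a common semantic field."""
    return sorted(w for w, feats in components.items() if feature in feats)

print(semantic_field("female"))  # ['actress', 'girl', 'woman']

# A tiny hyponymy ("kind of") network in the WordNet spirit.
kind_of = {"gull": "bird", "bird": "animal", "rose": "flower", "flower": "plant"}

def hypernyms(word):
    """Follow "kind of" links upward to collect a word's hypernym chain."""
    chain = []
    while word in kind_of:
        word = kind_of[word]
        chain.append(word)
    return chain

print(hypernyms("gull"))  # ['bird', 'animal']
```

Hand-built networks like this stay interpretable ("part of", "kind of"); the automated ontologies mentioned above replace such labelled links with computed vectors that carry no explicit meaning.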
The formal study of semantics intersects with many other fields of inquiry, including lexicology, syntax, pragmatics, etymology and others, although semantics is a well-defined field in its own right, often with synthetic properties. In philosophy of language, semantics and reference are related fields. Further related fields include philology, communication, and semiotics. The formal study of semantics is therefore complex.
Semantics contrasts with syntax, the study of the combinatorics of units of a language (without reference to their meaning), and pragmatics, the study of the relationships between the symbols of a language, their meaning, and the users of the language. Linguistic semantics is the study of meanings that humans use language to express. Other forms of semantics include the semantics of programming languages, formal logics, and semiotics.
Semantics, in short, is the study of meaning. It typically focuses on the relation between signifiers, such as words, phrases, signs and symbols, and what they stand for.