Face To Face Communication Speakers Use Gestures English Language Essay


Kendon (2004) mentions that different categorisation schemes of co-speech gestures have been described throughout history. The first stems from the first century A.D. (Quintilianus, 1922). Many of those categorisation schemes distinguished pointing gestures, the focus of this thesis, from other types of gestures: Quintilianus in 60 A.D.; Austin (1802) at the beginning of the 19th century; Wundt (1973), Efron (1972) and Ekman and Friesen (1969) during the 20th century. Hence, pointing gestures form a robust category. The most commonly used categorical distinction of co-speech gestures today (Kendon, 1988; McNeill, 1992) describes four types. The characteristics of those four types will be discussed in the following. Because pointing gestures are of special interest for this thesis, their characteristics will be highlighted in more detail.

Iconic gestures provide an imagistic description of the semantic content of speech by representing concrete object attributes, actions or spatial relationships. For example, a speaker might say: "Please let me know if you will come to the movies this evening" and simultaneously raise his hand, palm up, imitating holding a mobile phone and typing the buttons with his thumb, accompanying the word know. The meaning of this gesture would be that the speaker wants the listener to send a text message if he is coming to the movies this evening.

Metaphoric gestures provide an imagistic description of the speech content as well, but as opposed to iconic gestures they represent an abstract idea in a concrete form. The following example is described by McNeill (1992). If a speaker says: "I have a question" and simultaneously forms his hand like a cup accompanying the word question, this cup gesture is either the question itself or the speaker's hand ready to receive the answer to the question. In both ways the cup gesture represents an abstract idea.

Beat gestures are small and quick movements that keep the rhythm of speech. As opposed to iconic and metaphoric gestures, beats do not represent the semantic content of speech. It has been argued that beats instead reflect the discourse-pragmatic structure of speech, for example by marking new information.

Finally, pointing gestures establish a link between a point in space and a referent. Pointing gestures can be divided into two subtypes (McNeill et al., 1993). Concrete pointing gestures have a physically present referent. For example, if you visit a friend for dinner he might say: "Please, have a seat" while pointing to an empty chair. Abstract pointing gestures, however, have a referent that is physically absent. For example, if your friend tells you about his holiday he may utter: "There were beautiful mountains", while pointing in front of him. In abstract pointing a speaker quasi 'creates' a referent in space. Concrete and abstract pointings can also be differentiated by their time of acquisition. Concrete pointings are the first gestures to develop in children (Bates et al., 1979), appearing already around the first birthday. Abstract pointings, however, are the last of all co-speech gestures to be acquired, appearing after the age of twelve. The development of co-speech gestures thus moves from concrete pointing gestures, through descriptive iconic gestures, to discourse-structuring gestures (beats, metaphorics and abstract pointings). While concrete pointings contribute more to the semantic content of speech, abstract pointings have a more pragmatic function, like introducing new information as a potential discourse topic in adult conversations and storytelling (McNeill, 1992). These four types of co-speech gestures, also called gesticulations, are to be distinguished from gestures like emblems (the peace sign, thumbs up or the victory sign), pantomime or signs (the building blocks of sign languages) (Kendon, 1988).

If you move along what McNeill (1992) named Kendon's continuum, from gesticulations to emblems to pantomime to signs, the degree to which speech accompanies gestures decreases and the degree to which gestures show language properties increases. The main difference is that the latter three gesture types, as opposed to gesticulations, carry meaning in the absence of speech. Although gesture is the covering term for co-speech gestures, emblems, pantomime and signs, it is also often used to refer to co-speech gestures specifically, as will be done here from now on.

1.1.2 The relation between gestures and speech

In this section, I will first discuss how gestures and speech carry meaning differently and, second, how these two information sources may work together as an integrated system. As already mentioned in the introduction, McNeill (1992) claims that gesture and speech combine to reveal meaning that is not fully captured in one modality. The two modalities reflect different aspects of a unitary underlying cognitive process because they both convey information in a different way. McNeill (1992) discusses three points on which gestures carry meaning differently than speech. First, in gestures the meaning of the whole projects onto the meaning of the parts, while in speech the meanings of the parts (i.e., the words) are combined to create the meaning of the whole (i.e., the sentence). Speech is segmented and linear and therefore has a hierarchical structure. Gestures are global (i.e., the whole determines the parts) and synthetic (i.e., a gesture can combine many meanings) and never hierarchical. Second, gestures are noncombinatoric, while speech is combinatoric. Two gestures do not combine to form a larger meaning. In speech, however, lower constituents combine into higher constituents. Third, speech is arbitrary while gestures are not. Arbitrariness means that there is no natural reason why a signal is attached to a particular concept. There are almost no natural links between words and their meanings. Gestures, however, depict their meanings: they resemble what something looks like. As opposed to speech, there are no conforming standards for making a gesture and there is no one-to-one mapping between form and meaning.

As already discussed, there are gestures that occur only during speech. The acts of gesturing and speaking seem to be bound to each other in time. There are two ways in which gestures and speech may be linked to each other (McNeill, 1992) that I want to explain further here. First, gesture and speech are semantically and pragmatically co-expressive. That is, gesture and speech present the same or related semantic meaning or pragmatic function. Second, gesture and speech are co-temporal. That is, gesture and speech are synchronous. The most meaningful phase of a gesture, the stroke, lines up temporally with the equivalent linguistic segment.

Time phases of co-speech gestures

The term stroke was established by Kendon (1972; 1980). In his terminology, a gesture phrase (i.e., what we intuitively call a gesture) typically passes through three phases. The preparation phase is the time period from the beginning of the gesture movement up to the stroke: the hand moves from its rest position to the gesture space where the stroke begins. The stroke is the main phase of the gesture, during which the peak effort of the movement occurs and meaning is expressed. As mentioned, the stroke lines up in time with the linguistic segments that are co-expressive with it. Finally, the retraction phase is the period from the end of the stroke up to the end of the gesture movement: the hand falls back into a resting position. The preparation and retraction phases are optional, but the stroke is considered an obligatory element of the gesture phrase.