Are there vocal sounds that mean the same whatever your language? Discuss, with examples.
Vocal sounds, sounds produced via the human vocal tract, which convey the same meaning whatever your language must by definition be universally recognised, both in terms of the sound being identified and with regard to the message the sound carries. Given the lack of cultural exposure between certain language groups (Saul, 2014), vocal sounds with cross-linguistic meanings point towards evolutionary adaptations, which by their very nature are inherently universal. The following essay shall show that there are vocal sounds that mean the same whatever your language. It shall do this both by discussing studies that provide evidence for vocal sounds with cross-linguistic meaning and by explaining these vocal sounds in an evolutionary context, thereby affirming them as sounds which carry universal meanings no matter what the recipient's native language is.
Evidence of laughter in our evolutionary relatives such as chimpanzees (Falk, 2004), and even in more distant mammalian relatives such as dogs and rats (Panksepp, 2007), clearly points towards its status as an evolutionary adaptation, one which would be universal and therefore a vocal sound which means the same whatever one's language. Further studies indicate that laughter in both humans and non-human primates involves similar neural structures, such as parts of the limbic system (Meyer, Baumann, Wildgruber, & Alter, 2007; Scott, Lavan, Chen, & Mcgettigan, 2014), and mechanisms of endorphin activation linked to positive affective states (Scott et al., 2014). Its status as a universally inherited trait is further supported by its presence in congenitally blind and deaf infants (Meyer et al., 2007), who are born without the ability to hear or otherwise perceive laughter and therefore cannot have learnt to laugh via socialisation. Laughter's presence in non-human primates, involving similar cortical structures and neural mechanisms, together with its observation in the congenitally blind and deaf, points towards a biological evolutionary adaptation; one which would clearly be universal and is therefore an example of a vocal sound which conveys the same meaning whatever one's language.
The context in which laughter takes place further points to its being an evolutionary adaptation. Laughter is innately social: we are around 30 times more likely to laugh in a social situation than when alone (Scott et al., 2014). This is mirrored in non-human primates, where laughter frequently takes place in social situations and appears to facilitate bonding and social cohesion (Ross, Owren, & Zimmermann, 2009). Whilst non-human primate laughter typically occurs during physical contact (Provine, 1996), it is contextually comparable with human laughter because of this shared occurrence in social situations. It is this comparison, both in terms of context and the underlying neural mechanisms, which points towards a universal evolutionary adaptation, one that continues to facilitate social bonding. The similarities between human and non-human primate laughter therefore point towards a degree of biological inheritance which, considered in an evolutionary context, must be shared by all regardless of differences in language use; laughter can thus clearly be seen as a vocal sound which means the same whatever one's language.
However, laughter is not the only affective signal shown to carry meaning cross-linguistically. It is widely established that cross-cultural recognition of emotions exists (Sauter, Eisner, Ekman, & Scott, 2010). Although this point is firmly embedded in the literature (Ekman, 1992), it does not by itself provide evidence for vocalisations that carry cross-linguistic meaning, given the environmental and visual contexts in which emotions are typically conveyed (Elfenbein & Ambady, 2002). Elfenbein and Ambady (2002) performed a meta-analysis of 97 studies on the universality of emotional recognition across 42 different regions, finding that whilst there was an in-group advantage for members of the same nation, region and/or language, emotions were universally recognised at above-chance levels. Although their meta-analysis covered studies using a range of channels to convey emotions, this above-chance recognition remained when considering studies that focussed on vocal stimuli alone (Elfenbein & Ambady, 2002). By statistically analysing a variety of studies and showing consistent patterns across them, Elfenbein and Ambady's (2002) meta-analysis provides stronger evidence that there are vocalisations that mean the same whatever your language than one or two studies considered in isolation could. Furthermore, it suggests that certain emotions are universally recognised, most likely due to biological mechanisms (when one considers the lack of cultural exposure some groups have had with one another). The presence of universal cognitive mechanisms which decode aspects of emotional vocalisations also means that there are vocal sounds which mean the same whatever your language, as the emotions were recognised from purely vocal stimuli and the meta-analysis supports the notion that this recognition is universal and therefore not dependent upon specific languages.
However, removing multiple channels of communication such as facial expression and body language is not sufficient when one considers the linguistic context in which emotional vocalisations are usually realised (Pell et al., 2009); even to non-speakers, a foreign language may convey linguistic features that alter the meaning of vocal cues. In order to circumvent these potentially confounding effects, speakers must express emotions through pseudo-utterances which mimic the morphosyntactic and phonotactic properties of the language presented (Scherer, Banse, & Wallbott, 2001). It therefore seems sensible to suggest that the cross-cultural recognition of emotions from pseudo-utterances, presented independently of other potential cues (such as facial expression and body language), provides substantial evidence for there being vocal sounds that mean the same whatever your language; after all, every confounding factor other than the vocal sound itself will have been removed.
A number of studies using pseudo-utterances presented as purely vocal stimuli suggest that emotions can be recognised across languages by non-native speakers (Pell & Skorup, 2008; Pell et al., 2009a; Pell, Paulmann, Dara, Alasseri, & Kotz, 2009b; Sauter et al., 2010). Although studies report a small in-group advantage when participants listen to pseudo-utterances based upon their native language (Pell et al., 2009b), similar results across non-native listeners suggest the presence of cross-linguistic vocal sounds with identical meanings (Pell & Skorup, 2008; Sauter et al., 2010). This argument is further strengthened by studies involving participants from groups with little to no cultural exposure to each other, such as Sauter et al.'s (2010) study with the Himba people of northern Namibia. Here the correct identification of emotions from purely vocal pseudo-utterances lends weight to the argument for cognitive mechanisms derived from universal evolutionary adaptations, capable of decoding meaning from vocal utterances. With no cultural exposure (which might have enabled the learning of another culture's emotional expression) and with correct identification in the absence of other potential cues (such as a linguistic framework or body language), it seems highly probable that the cross-cultural identification of emotions is in part due to universal evolutionary adaptations, which in turn enable the existence of vocal sounds that mean the same whatever your language.
Cross-cultural data thus provides evidence for vocal emotional expressions which exhibit core acoustic-perceptual features that promote accurate recognition across languages (Pell & Skorup, 2008). The use of pseudo-utterances removes linguistic structure and language itself as confounding variables, meaning that emotions successfully conveyed and recognised must be communicated through associated changes in prosody, such as changes in timing, pitch, volume and rate of speech (Frick, 1985; Scherer, 1986). Furthermore, it appears that the expression of discrete emotions corresponds with distinct modulation patterns (Pell, 2001); for example, vocal expressions of sadness tend to be conveyed with a lower pitch and at a slower speaking rate than other emotional vocalisations (Pell et al., 2009b). It should also be noted that, as well as being the most distinct from other emotional vocalisations in terms of its prosodic elements, sadness is frequently cited as one of the most accurately identified emotions from vocal stimuli (Pell et al., 2009a, 2009b). This pairing of high recognition with high distinctiveness in modulation pattern provides further evidence that accurate recognition depends on prosodic elements; the correlation supports the theory that emotional vocalisations are recognised through the recognition of distinct prosodic patterns. This line of thought is further supported by the finding that emotions with less distinct prosodic patterns are associated with lower rates of recognition; for example, surprise and joy possess similar prosodic elements (Pell et al., 2009b) and are recognised at low accuracy rates, with surprise frequently being miscategorised as joy (Pell et al., 2009a, 2009b).
The presence of distinct prosodic elements in the vocalisation of emotions further supports the premise that there are vocal sounds which mean the same whatever your language; this point is reinforced by the correlation between the distinctiveness of a vocal expression's prosodic elements and higher levels of accurate recognition.
Prosody has also been studied outside of emotional vocalisation, pointing to further universal cross-linguistic meanings such as dominance and submission, confidence, and the signalling of a statement or question to the listener. Ohala (1984, 1996) claims that we associate fundamental frequency (f0) with sexual dimorphism, size and, as a result, dominance; males' larger and lower-positioned larynges lead to a lower f0 and more confident-sounding vocalisations (Hurford, 2014, pp. 77-80). Similarities can be drawn with avian and other mammalian vocalisations with regard to f0, with low-f0 vocalisations frequently made by individuals of greater dominance (Morton, 1977). Ohala's (1984, 1996) claim is supported by a variety of evidence showing that low-f0 voices are interpreted as more masculine (Culver, as cited in Gussenhoven, 2002; Junger et al., 2013) and are associated with dominant attributes such as confidence and leadership (Klofstad, Anderson, & Peters, 2012). Whilst these studies predominantly focus on vocalisations from a Western language base (such as English and Dutch), the comparison across species, together with the universal presence of larger, lower-positioned larynges in human males (Hurford, 2014, pp. 77-80), suggests a universal evolutionary adaptation, in which differences in larynx size and location have evolved because of the selective advantage conferred by the meanings low-f0 vocalisations convey with regard to dominance, size and aggression. Studies comparing these affective interpretations across a broader range of languages would add further support to Ohala's (1984, 1996) conclusion; however, it seems improbable that other language bases would offer different interpretations, considering the effect of low-f0 vocalisations in our evolutionary ancestors and the universal sexual differences in larynx size and location.
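The frequency-code claims above rest on a measurable acoustic property, the fundamental frequency (f0). As a purely illustrative aside (not drawn from the essay's sources), the sketch below estimates f0 by autocorrelation on two synthetic tones whose frequencies are hypothetical stand-ins for a lower "male-range" and a higher "female-range" voice; real pitch trackers are considerably more sophisticated.

```python
import numpy as np

def estimate_f0(signal, sample_rate, fmin=50.0, fmax=400.0):
    """Estimate fundamental frequency (Hz) via autocorrelation.

    The autocorrelation of a periodic signal peaks at lags equal to
    multiples of its period; we search for the strongest peak within
    the lag range corresponding to plausible voice pitches.
    """
    sig = signal - signal.mean()
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lag_min = int(sample_rate / fmax)  # shortest period considered
    lag_max = int(sample_rate / fmin)  # longest period considered
    lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / lag

sr = 16000
t = np.arange(sr) / sr                     # one second of samples
low_voice = np.sin(2 * np.pi * 125 * t)    # hypothetical lower-f0 tone
high_voice = np.sin(2 * np.pi * 200 * t)   # hypothetical higher-f0 tone
print(estimate_f0(low_voice, sr), estimate_f0(high_voice, sr))  # 125.0 200.0
```

On these clean tones the estimator recovers 125 Hz and 200 Hz exactly; the listener-side analogue, on Ohala's account, is a perceptual system tuned to read such f0 differences as cues to size and dominance.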
The affective interpretations of f0 have been extended from signals of dominance to signalling the distinction between questions and statements (Ohala, 1984; Gussenhoven, 2002). This seems a logical step when one considers that questions are relatively uncertain in meaning, whilst statements need to convey more certainty in order to carry a more authoritative status. It is confirmed by cross-linguistic studies showing that a higher f0 towards the end of a vocalisation is frequently perceived as marking a question (Hadding-Koch & Studdert-Kennedy, 1964; Gussenhoven & Chen, 2000). Ohala (1984) claims that this pattern is too widespread to be explained by a common linguistic source, suggesting that it exists due to universal evolutionary adaptations. Gussenhoven and Chen's (2000) study should be highlighted for its use of three languages (Hungarian, Dutch and Chinese) quite distinct from each other both in structure and in belonging to separate language families; the fact that this interpretation of f0 is present in three languages which have evolved separately removes the suggestion that it is tied to linguistic structure rather than to universal, evolutionarily instilled cognitive mechanisms. Cross-linguistic evidence therefore suggests that a rise in f0 towards the end of a vocalisation signals a question whatever one's language, again providing evidence for cross-linguistic meaning in vocal sounds.
To conclude, cross-linguistic studies support the claim that there are vocal sounds which mean the same whatever your language. Studies using pseudo-utterances remove confounding variables such as linguistic structure and visual stimuli, showing that vocal sounds can carry information on affective states understood by the recipient whether or not speaker and listener share a common language. Further cross-linguistic studies highlight the effect of prosody on meaning, both in the delivery of emotional vocal sounds and in a broader context; sounds which again have been shown to carry meaning across languages. Comparative research provides additional evidence for vocal sounds that carry meaning across languages, such as laughter and displays of confidence and dominance. It remains important, however, to consider these vocal sounds in an evolutionary context: given the lack of exposure many language groups have had to one another, vocal sounds with universal meanings must be understood as arising from biologically inherited adaptations.
Ekman,P. (1992). Are there basic emotions? Psychological Review, 99(3), 550-553. doi:10.1037/0033-295X.99.3.550
Elfenbein,H.A., & Ambady,N. (2002). On the universality and cultural specificity of emotion recognition: A meta-analysis. Psychological Bulletin, 128(2), 203-235. doi:10.1037//0033-2909.128.2.203
Falk,D. (2004). Prelinguistic evolution in early hominins: Whence motherese? Behavioral and Brain Sciences, 27, 491–541. doi:10.1017/S0140525X04000111
Frick,R.W. (1985). Communicating emotion: The role of prosodic features. Psychological Bulletin, 97(3), 412-429. doi:10.1037/0033-2909.97.3.412
Gussenhoven,C. (2002). Intonation and interpretation: Phonetics and phonology. Proceedings of Speech Prosody 2002, Aix-en-Provence, France, 47-57.
Gussenhoven,C., & Chen,A. (2000). Universal and language-specific effects in the perception of question intonation. Proceedings of the 6th International Conference on Spoken Language Processing, 91-94.
Hadding-Koch,K., & Studdert-Kennedy,M. (1964). An experimental study of some intonation contours. Phonetica, 11, 175-185. doi:10.1159/000258338
Hurford,J.R. (2014). The origins of language: A slim guide. UK: OUP Oxford.
Junger,J., Pauly,K., Bröhr,S., Birkholz,P., Neuschaefer-Rube,C., Kohler,C., . . . Habel,U. (2013). Sex matters: Neural correlates of voice gender perception. NeuroImage, 79, 275-287. doi:10.1016/j.neuroimage.2013.04.105
Klofstad,C., Anderson,R., & Peters,S. (2012). Sounds like a winner: voice pitch influences perception of leadership capacity in both men and women. Proceedings of the Royal Society B: Biological Sciences, 279(1738), 2698-704. doi:10.1098/rspb.2012.0311
Meyer,M., Baumann,S., Wildgruber,D., & Alter,K. (2007). How the brain laughs. Behavioural Brain Research, 182(2), 245–260. doi:10.1016/j.bbr.2007.04.023
Morton,E.S. (1977). On the occurrence and significance of motivation-structural rules in some bird and mammal sounds. American Naturalist, 111, 855-869. doi:10.1086/283219
Ohala,J.J. (1984). An ethological perspective on common cross-language utilization of F0 of voice. Phonetica, 41, 1-16. doi:10.1159/000261706
Ohala,J.J. (1996). Ethological theory and the expression of emotion in the voice. Proceedings of the Fourth International Conference on Spoken Language Processing (ICSLP '96), 3, 1812-1815. doi:10.1109/ICSLP.1996.607982
Panksepp,J. (2007). Neuroevolutionary sources of laughter and social joy: Modeling primal human laughter in laboratory rats. Behavioural Brain Research, 182, 231-244. doi:10.1016/j.bbr.2007.02.015
Pell,M.D. (2001). Influence of emotion and focus location on prosody in matched statements and questions. Journal of The Acoustical Society of America, 109(4), 1668–1680. doi:10.1121/1.1352088
Pell,M.D., Monetta,L., Paulmann,S., & Kotz,S.A. (2009). Recognizing emotions in a foreign language. Journal of Nonverbal Behavior, 33(2), 107-120. doi:10.1007/s10919-008-0065-7
Pell,M.D., Paulmann,S., Dara,C., Alasseri,A., & Kotz,S.A. (2009). Factors in the recognition of vocally expressed emotions: A comparison of four languages. Journal of Phonetics, 37(4), 417-435. doi:10.1016/j.wocn.2009.07.005
Pell,M.D., & Skorup,V. (2008). Implicit processing of emotional prosody in a foreign versus native language. Speech Communication, 50(6), 519-530. doi:10.1016/j.specom.2008.03.006
Provine,R.R. (1996). Laughter. American Scientist, 84, 38-47. Retrieved from http://cogweb.ucla.edu/Abstracts/Provine_96.html
Provine,R.R., & Fischer,K.R. (1989). Laughing, Smiling, and Talking: Relation to Sleeping and Social Context in Humans. Ethology, 83(4), 295–305. doi:10.1111/j.1439-0310.1989.tb00536.x
Ross,M.D., Owren,M.J., & Zimmermann,E. (2009). Reconstructing the evolution of laughter in great apes and humans. Current Biology, 19(13), 1106-1111. doi:10.1016/j.cub.2009.05.028
Saul,H. (2014, July 31). Amazonian Indian tribe filmed making contact with Brazil village in rare video footage. The Independent. Retrieved from http://www.independent.co.uk/news/world/americas/video-shows-amazonian-indian-tribe-making-contact-with-brazil-village-9640077.html
Sauter,D.A., Eisner,F., Ekman,P., & Scott,S.K. (2010). Cross-cultural recognition of basic emotions through nonverbal emotional vocalizations. Proceedings of The National Academy of Sciences, 107, 2408-2412. doi:10.1073/pnas.0908239106
Scherer,K.R. (1986). Vocal affect expression: A review and a model for future research. Psychological Bulletin, 99(2), 143-165. doi:10.1037//0033-2909.99.2.143
Scherer,K.R., Banse,R., & Wallbott,H.G. (2001). Emotion inferences from vocal expression correlate across languages and cultures. Journal of Cross-cultural Psychology, 32(1), 76-92. doi:10.1177/0022022101032001009
Scott,S.K., Lavan,N., Chen,S., & Mcgettigan,C. (2014). The social life of laughter. Trends in Cognitive Sciences, 18(12), 618-620. doi:10.1016/j.tics.2014.09.002