Lexical and Pragmatic Considerations of Input Structures


Increased access to computer-based tools has made only too clear the deficiencies in our ability to produce effective user interfaces (Baecker, 1980a). Many of our current problems are rooted in our lack of sufficiently powerful theories and methodologies. User interface design remains more of a creative art than a hard science.

Following an age-old technique, the point of departure for much recent work has been to attempt to impose some structure on the problem domain. Perhaps the most significant difference between this work and earlier efforts is the weight placed on considerations falling outside the scope of conventional computer science. The traditional problem-reduction paradigm is being replaced by a holistic approach which views the problem as an integration of issues from computer science, electrical engineering, industrial design, cognitive psychology, psychophysics, linguistics, and kinesthetics.

In the main body of this paper, we examine some of the taxonomies which have been proposed and illustrate how they can serve as useful structures for relating studies in user interface problems. In so doing, we attempt to augment the power of these structures by developing their ability to take into account the effect of gestural and positional factors on the overall effect of the user interface.


One structure for viewing the problem domain of the user interface is provided by Foley and Van Dam (1982). They describe the space in terms of the following four layers:

- conceptual level

- semantic level

- syntactic level

- lexical level

The conceptual level incorporates the main concepts of the system as seen by the user. Therefore, Foley and Van Dam see it as being equivalent to the user model. The semantic level incorporates the functionality of the system: what can be expressed. The syntactic level defines the grammatical structure of the tokens used to articulate a semantic concept. Finally, the lexical component defines the structure of these tokens.

One of the benefits of such a taxonomy is that it can serve as the basis for systems analysis in the design process. It also helps us categorize various user interface studies so as to avoid "apples and bananas" comparisons. For example, the studies of Ledgard, Whiteside, Singer and Seymour (1980) and Barnard, Hammond, Morton and Long (1981) both address issues at the syntactic level. They can, therefore, be compared (which is quite interesting, since they give highly contradictory results) [1]. On the other hand, by recognizing the "keystroke" model of Card, Moran and Newell (1980b) as addressing the lexical level, we have a good way of understanding its limitations and comparing it to related studies (such as Embley, Lan, Leinbaugh and Nagy, 1978), or relating it to studies which address different levels (such as the two studies in syntax mentioned above).

While the taxonomy presented by Foley and Van Dam has proven to be a useful tool, our opinion is that it has one major shortcoming: the grain of the lexical level is too coarse to permit the full benefit of the model to be derived. As defined, the authors lump together issues as diverse as:

how tokens are spelt (for example, "add" vs "append" vs "a" vs some graphical icon)

where items are placed spatially on the display (both in terms of the layout and number of windows, and the layout of data within those windows)

where devices are placed in the work station

the type of physical gesture (as determined by the transducer employed) used to articulate a token (pointing with a joystick vs a lightpen vs a tablet vs a mouse, for example)

These issues are sufficiently different to warrant separate treatment. Grouping them under a single heading risks generating confusion comparable to that which would result if no distinction were made between the semantic and syntactic levels. Therefore, taking our cue from work in language understanding research in the AI community, we choose to subdivide Foley and Van Dam's lexical level into the following two components:

lexical: issues having to do with spelling of tokens (i.e., the ordering of lexemes and the nature of the alphabet used - symbolic or iconic, for example).

pragmatic: issues of gesture, space and devices.

To illustrate the distinction, in the Keystroke model the number of key pushes would be a function of the lexical structure while the homing time and pointing time would be a function of pragmatics.
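The split can be made concrete with a back-of-envelope Keystroke-Level-Model calculation. The sketch below is our own illustration: the operator values are representative figures in the spirit of Card, Moran and Newell (1980b), and the "delete word" task is hypothetical.

```python
# Keystroke-Level Model sketch: predicted execution time split into a
# lexical term (keystrokes) and pragmatic terms (homing, pointing).
# Operator times (seconds) are illustrative averages, not definitive.
K = 0.28   # press one key (average typist)   -> lexical structure
P = 1.10   # point at a target with a device  -> pragmatics
H = 0.40   # home the hand between devices    -> pragmatics

def klm_time(keystrokes, points, homings):
    """Return the (lexical, pragmatic) components of predicted time."""
    lexical = keystrokes * K
    pragmatic = points * P + homings * H
    return lexical, pragmatic

# A hypothetical "delete word" task: point at the word, home back to the
# keyboard, then type a three-keystroke command.
lex, prag = klm_time(keystrokes=3, points=1, homings=1)
```

Even in this toy case the pragmatic terms dominate, which is why a model that captures only the lexical structure understates the cost of an interaction.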

Factoring out these two levels helps us focus on the fact that the issues affecting each are different, as is their influence on the overall effect of the user interface. This is illustrated in examples which are presented later in this paper.

It should be pointed out that our isolation of what we have called pragmatic issues is not especially original. We see a similar view in the Command Language Grammar of Moran (1981), which is the second main taxonomy which we present. Moran represents the domain of the user interface in terms of three components, each of which is sub-divided into two levels. These are as follows:

Conceptual Component

- task level

- semantic level

Communication Component

- syntactic level

- interaction level

Physical Component

- spatial level

- device level

The task level encompasses the set of tasks which the user brings to the system and for which it is intended to serve as a tool. The semantic level lays out the conceptual entities of the system and the conceptual operations upon them. As with the Foley and Van Dam model, the syntactic level then incorporates the structure of the language within which the semantic level is embedded.

The interaction level relates the user's physical actions to the conventions of the interactions in the dialogue. The spatial level then encompasses issues related to how information is laid out on the display, while the device level covers issues such as what types of devices are used and their properties (for example, the effect on user performance if the locator used is a mouse vs an isometric joystick vs step-keys). (A representative discussion of such issues can be found in Card, English and Burr, 1978).

One subtle but important emphasis in Moran's paper is on the point that it is the effect of the user interface as a whole (that is, all levels combined) which constitutes the user's model. The other main difference of his taxonomy, when compared to that of Foley and Van Dam, is his emphasis on the importance of the physical component. A shortcoming, however, lies in the absence of a slot which encapsulates the lexical level as we have defined it above. Like the lexical level (as defined by Foley and Van Dam), the interaction level of Moran appears a little too broad in scope when compared to the other levels in the taxonomy.



In examining the two studies discussed above, one quickly recognizes that the effect of the pragmatic level on the user interface, and therefore on the user model, is given very little attention. Moran, for example, points out that the physical component exists and that it is important, but does not discuss it further. Foley and Van Dam bury these issues within the lexical level. Our main thesis is that since the primary level of contact with an interactive system is at the level of pragmatics, this level has one of the strongest effects on the user's perception of the system. Consequently, the models which we adopt in order to specify, design, implement, compare and evaluate interactive systems must be sufficiently rich to capture and communicate the system's properties at this level. This is clearly not the case with most models, and this should be cause for concern. To illustrate this, let us examine a few case studies which relate the effect of pragmatics to:

pencil-and-paper tests of query languages

ease of use with respect to action language grammars

device independence


As an aid to the design of effective data base query languages, Reisner (1977) has proposed the use of pencil-and-paper tests. Subjects were taught a query language in a classroom environment and then tested as to their ability to formulate and understand queries. Different control groups were taught different languages. By comparing the test results of the different groups, Reisner drew conclusions as to the relative "goodness" of structure and ease of learning of the different languages. She then made the argument that the technique could be used to find weaknesses in new languages before they are implemented, thereby shortening their development cycle.

While the paper makes some important points, it has a serious defect in that it does not point out the limitations of the technique. The approach does tell us something about the cognitive burden involved in the learning of a query language. But it does not tell us everything. In particular, the technique is totally incapable of taking into account the effect that the means and medium of doing something have on our ability to remember how to do it. To paraphrase McLuhan, the medium does affect the message.

Issues of syntax are not independent of pragmatics, but pencil-and-paper tests cannot take such dependencies into account. For example, consider the role of "muscle memory" in recalling how to perform various tasks. The strength of its influence can be seen in my ability to type quite effectively, even though I am incapable of telling you where the various characters are on my QWERTY keyboard, or in my ability to open a lock whose combination I cannot recite. Yet, this effect will never show up in a pencil-and-paper test. Another example is seen in the technique's inability to take into account the contribution that appropriate feedback and help mechanisms can provide in developing mnemonics and other memory and learning aids.

We are not trying to claim that such pencil-and-paper tests are not of use (although Barnard et al, 1981, point out some important dangers in using such techniques). We are simply trying to illustrate some of their limitations, and demonstrate that lack of adequate emphasis on pragmatics can result in readers (and authors) drawing false or misleading conclusions from their work. Furthermore, we conjecture that if pragmatics were isolated as a separate level in a taxonomy such as that of Foley and Van Dam, they would be less likely to be ignored.



In another study, Reisner (1981) makes an important contribution by showing how the analysis of the grammar of the "action language" of an interactive system can provide valuable metrics for predicting the ease of use and proneness to error of that system. Thus, an important tool for system design, analysis and comparison is introduced.

The basis of the technique is that the complexity of the grammar is a good metric for the cognitive burden of learning and using the system. Grammar complexity is measured in terms of number of productions and production length. There is a problem, however, which limits our ability to reap the full benefits of the technique. This has to do with the technique's current inability to take into account what we call chunking. By this we mean the phenomenon where two or more actions fuse together into a single gesture (in a manner analogous to the formation of a compound word in language). In many cases, the cognitive burden of the resulting aggregate may be the equivalent of a single token. In terms of formal language theory, a non-terminal when effected by an appropriate compound gesture may carry the cognitive burden of a single terminal.
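As a toy illustration of the metric (the grammars, symbols and notation below are our own, not Reisner's), two hypothetical "action language" grammars for the same task can be compared by counting their productions and measuring production length:

```python
# Reisner-style complexity metrics for two hypothetical action-language
# grammars describing the same "delete" task. Each production maps a
# non-terminal to the sequence of symbols on its right-hand side.
grammar_a = {                      # structured: select, then confirm
    "delete":  ["SELECT", "CONFIRM"],
    "SELECT":  ["point", "press-button"],
    "CONFIRM": ["press-button"],
}
grammar_b = {                      # flat: one long production
    "delete": ["point", "press-button", "type-D", "type-E", "type-L",
               "press-button"],
}

def complexity(grammar):
    """(number of productions, length of the longest production)."""
    return len(grammar), max(len(rhs) for rhs in grammar.values())
```

By these counts grammar_a has more but shorter productions. The chunking argument that follows is precisely that such counts can mislead: when a whole production is effected by one compound gesture, its cognitive cost may be that of a single terminal.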

Such chunking may be either sequential, parallel or both. Sequentially, it should be recognized that some actions have different degrees of closure than others. For example, take two events, each of which is to be triggered by the change of state of a switch. If a foot-switch similar to the high/low beam switch in some cars is used, the down action of a down/up gesture triggers each event. The point to note is that there is no kinesthetic connection between the gesture that triggers one event and that which triggers the other. Each action is complete in itself and, as with driving a car, the operator is free to initiate other actions before changing the state of the switch again.

On the other hand, the same binary function could be controlled by a foot pedal which functions like the sustain pedal of a piano. In this case, one state change occurs on depression, a second on release. Here, the point to recognize is that the second action is a direct consequent of its predecessor. The syntax is implicit, and the cognitive burden of remembering what to do after the first action is minimal.

There are many cases where this type of kinesthetic connectivity can be bound to a sequence of tokens which are logically connected. One example given by Buxton (1982) is in selecting an item from a graphics menu and "dragging" it into position in a work space. A button-down action (while pointing at an item) "picks it up". For as long as the button is depressed, the item tracks the motion of the pointing device. When the button is released, the item is anchored in its current position. Hence, the interface is designed to force the user to follow proper syntax: select then position. There is no possibility for syntactic error, and cognitive resources are not consumed in trying to remember "what do I do next?". Thus, by recognizing and exploiting such cases, interfaces can be constructed which are "natural" and easy to learn.
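A minimal sketch of this select-then-position dialogue follows (the class and method names are our own). While the button is held the item tracks the device; release anchors it; motion without a prior selection does nothing, so no sequence of user actions can produce a syntax error.

```python
# Select-then-position ("dragging") as a two-state dialogue: the
# button-down/button-up pair brackets the positioning phase, so the
# syntax (select before position) is enforced by the gesture itself.
class Item:
    def __init__(self, x=0, y=0):
        self.x, self.y = x, y

class DragDialogue:
    def __init__(self):
        self.held = None                  # item being dragged, if any

    def button_down(self, item):
        self.held = item                  # "pick up" the item pointed at

    def motion(self, x, y):
        if self.held is not None:         # item tracks only while held
            self.held.x, self.held.y = x, y

    def button_up(self):
        self.held = None                  # anchor at current position

ui, icon = DragDialogue(), Item()
ui.motion(5, 5)                           # no selection yet: ignored
ui.button_down(icon)
ui.motion(3, 4)                           # icon follows the device
ui.button_up()
ui.motion(9, 9)                           # icon stays anchored at (3, 4)
```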

There is a similar type of chunking which can take place when two or more gestures are articulated at one time. Again we can take an example from driving a car, where in changing gears the actions on the clutch, accelerator and gear-shift reinforce one another and are coordinated into a single gesture. Choosing appropriate gestures for such coordinated actions can accelerate their bonding into what the user thinks of as a single act, thereby freeing up cognitive resources to be applied to more important tasks. What we are arguing here is that by matching appropriate gestures with tasks, we can help render complex skills routine and gain benefits similar to those seen at a different level in Card, Moran and Newell (1980a).

In summary, there are three main points which we wish to make with this example:

there is an important interplay between the syntactic and lexical levels and the pragmatic level

that this interplay can be exploited to reduce the cognitive burden of learning and using a system

that this cannot be accomplished without a better understanding of pragmatic issues such as chunking and closure.


We began by declaring the importance of being able to incorporate pragmatic issues into the models which we use to specify, design, compare and evaluate systems. The examples which followed then illustrated some of the reasons for this belief. When we view the CORE proposal (GSPC, 1977; GSPC, 1979) from this perspective, however, we see several problems. The basis of how the CORE system approaches input is to deal with user actions in terms of abstractions, or logical devices (such as "locators" and "valuators"). The intention is to facilitate software portability. If all "locators", for example, utilized a common protocol, then user A (who only had a mouse) could easily implement software developed by B (who only had a tablet).

From the application programmer's perspective, this is a valuable feature. However, for the purposes of specifying systems from the user's point of view, these abstractions are of very limited benefit. As Baecker (1980b) has pointed out, the effectiveness of a particular user interface is often due to the use of a particular device, and that effectiveness will be lost if that device were replaced by some other of the same logical class. For example, we have a system (Fedorkow, Buxton & Smith, 1978) whose interface depends on the simultaneous manipulation of four joysticks. Now in spite of tablets and joysticks both being "locator" devices, it is clear that they are not interchangeable in this situation. We cannot simultaneously manipulate four tablets. Thus, for the full potential of device independence to be realized, such pragmatic considerations must be incorporated into our overall specification model so that appropriate equivalencies can be determined in a methodological way. (That is, in specifying a generic device, we must also include the required pragmatic attributes. But to do so, we must develop a taxonomy of such attributes, just as we have developed a taxonomy of virtual devices.)
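One could imagine extending the specification of a logical device with just such pragmatic attributes. The sketch below is entirely hypothetical (the attribute names and values are our invention, not part of any CORE proposal), but it shows how the four-joystick substitution could be screened mechanically:

```python
# Hypothetical CORE-style "locator" records augmented with a pragmatic
# attribute. Names and values are invented for illustration only.
LOCATORS = {
    "tablet":    {"independently_graspable": False},
    "mouse":     {"independently_graspable": False},
    "joystick":  {"independently_graspable": True},
    "trackball": {"independently_graspable": True},
}

def substitutable(candidate, simultaneous=1):
    """Can this locator serve an interface that uses several at once?"""
    return simultaneous == 1 or LOCATORS[candidate]["independently_graspable"]

# Four simultaneous joysticks: a tablet will not do; a trackball might.
```

A real taxonomy of pragmatic attributes would of course need far more than one field; the point is only that, once such attributes are explicit, equivalence checks stop being a matter of intuition.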

Figure 1: Taxonomy of Input Devices.

Continuous manual input devices are categorized. The first order categorization is property sensed (rows) and number of dimensions (columns). Subrows distinguish between devices that have a mechanical intermediary (such as a stylus) between the hand and the sensing mechanism (indicated by "M"), and those which are touch sensitive (indicated by "T"). Subcolumns distinguish devices that use comparable motor control for their operation.



In view of the preceding discussion, we have attempted to develop a taxonomy which helps isolate relevant characteristics of input devices. The tableau shown in Figure 1 summarizes this effort in a two dimensional representation. The remainder of this section presents the details and motivation for this tableau's organization.

To begin with, the tableau deals only with continuous hand controlled devices. (Pedals, for example, are not included for simplicity's sake.) Therefore the first (but implicit) questions in our structure are:

continuous vs discrete?

agent of control (hand, foot, voice, ...)?

The table is divided into a matrix whose rows and columns delimit

what is being sensed (position, motion or pressure), and

the number of dimensions being sensed (1, 2 or 3),

respectively. These primary partitions of the matrix are delimited by solid lines. Hence, both the rotary and sliding potentiometer fall into the box associated with one-dimensional position-sensitive devices (top left-hand corner).

Note that the primary rows and columns of the matrix are subdivided, as indicated by the dotted lines. The sub-columns exist to isolate devices whose control motion is roughly similar. These groupings can be seen in examining the two-dimensional devices. Here the tableau implies that tablets and mice utilize similar types of hand control and that this control is different from that shared in using a light-pen or touch-screen. Furthermore, it is shown that joysticks and trackballs share a common control motion which is, in turn, different from that of the other subclasses of two-dimensional devices.

The rows for position and motion sensing devices are subdivided in order to differentiate between transducers which sense via mechanical vs touch-sensitive means. Thus, we see that the light-pen and touch-screen are closely related, except that the light-pen employs a mechanical transducer. Similarly, we see that the trackball and the TASA touch-pad [2] provide comparable signals from comparable gestures (the 4" by 4" dimensions of the TASA device compare to a 3 1/2" diameter trackball).
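The cells of the tableau can be approximated in code. The rendering below includes only the devices the text explicitly locates; it is our reading of Figure 1, not a reproduction of the full figure.

```python
# Partial rendering of the Figure 1 tableau: continuous manual input
# devices keyed by (property sensed, dimensions). "M" marks a mechanical
# intermediary between hand and sensor, "T" a touch-sensitive transducer.
TABLEAU = {
    ("position", 1): [("rotary pot", "M"), ("sliding pot", "M")],
    ("position", 2): [("tablet", "M"), ("light-pen", "M"),
                      ("touch-screen", "T")],
    ("motion", 2):   [("mouse", "M"), ("trackball", "M"),
                      ("TASA touch-pad", "T")],
    ("pressure", 1): [("torque sensor", "M")],  # the "predicted" device
}

def same_cell(device_a, device_b):
    """Rough equivalence: do two devices share a primary cell?"""
    return any(device_a in names and device_b in names
               for cell in TABLEAU.values()
               for names in [[n for n, _ in cell]])
```

A lookup of this kind is what makes the equivalence-finding of the next paragraphs mechanical: devices in the same cell are first candidates for substitution.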

The tableau is useful for many purposes by virtue of the structure which it imposes on the domain of input devices. First, it helps in finding appropriate equivalencies. This is important in terms of dealing with some of the problems which arose in our discussion of device independence. For example, we saw a case where four tablets would not be suitable for replacing four joysticks. By using the tableau, we see that four trackballs will probably do.

The tableau makes it easy to relate different devices in terms of metaphor. For example, a tablet is to a mouse what a joystick is to a trackball. Furthermore, if the taxonomy defined by the tableau can suggest new transducers in a manner analogous to the periodic table of Mendeleev predicting new elements, then we can have more confidence in its underlying premises. We make this claim for the tableau and cite the "torque sensing" one-dimensional pressure-sensitive transducer as an example. To our knowledge, no such device exists commercially. Nevertheless, it is a potentially useful device, an approximation of which has been demonstrated by Herot and Weinzapfel (1978).

Finally, the tableau is useful in helping quantify the generality of various physical devices. In cases where the work station is limited to one or two input devices, then it is often in the user's interest to choose the least constraining devices. For this reason, many people claim that tablets are the preferred device since they can emulate many of the other transducers (as is demonstrated by Evans, Tanner and Wein, 1981). The tableau is useful in determining the degree of this generality by "filling in" the squares which can be adequately covered by the tablet.

Before leaving the topic of the tableau, it is worth commenting on why a primary criterion for grouping devices was whether they were sensitive to position, motion or pressure. The reason is that what is sensed has a very strong effect on the nature of the dialogues that the system can support with any degree of fluency. As an example, let us compare how the user interface of an instrumentation console can be affected by the choice of whether motion or position sensitive transducers are used. For such consoles, one design philosophy follows the traditional model that for every function there should be a device. One of the rationales behind this approach is to avoid the use of "modes" which result when a single device must serve for more than one function. Another philosophy takes the point of view that the number of devices required in a console need only be in the order of the control bandwidth of the human operator. Here, the rationale is that careful design can minimize the "mode" problem, and that the resulting simple consoles are more cost-effective and less prone to breakdown (since they have fewer devices).

One consequence of the second philosophy is that the same transducer must be made to control different functions, or parameters, at different times. This context switching introduces something known as the nulling problem. The point which we are going to make is that this problem can be completely avoided if the transducer in question is motion rather than position sensitive. Let us see why.

Imagine that you have a sliding potentiometer which controls parameter A. Both the potentiometer and the parameter are at their minimum values. You then raise A to its maximum value by pushing up the position of the potentiometer's handle. You now want to change the value of parameter B. Before you can do so using the same potentiometer, the handle of the potentiometer must be repositioned to a position corresponding to the current value of parameter B. The necessity of having to perform this normalizing function is the nulling problem.

Contrast the difficulty of performing the above interaction using a position-sensitive device with the ease of doing so using one which senses motion. If a thumb-wheel or a treadmill-like device was used, the moment that the transducer is connected to the parameter it can be used to "push" the value up or "pull" it down. Furthermore, the same transducer can be used to simultaneously change the value of a group of parameters, all of whose instantaneous values are different.
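The contrast can be sketched as follows (the classes and method names are our own; `attach` models connecting the transducer to a new parameter):

```python
# Position- vs motion-sensitive control of multiple parameters from one
# transducer. The absolute device exhibits the nulling problem on every
# context switch; the relative device connects and goes.
class PositionControl:
    """Absolute transducer (e.g. sliding pot): output = handle position."""
    def __init__(self):
        self.handle = 0.0

    def attach(self, current_value):
        # Nulling problem: the user must first move the handle to match
        # the parameter's current value, or the parameter jumps. We model
        # that normalizing act here.
        self.handle = current_value

    def move_to(self, position):
        self.handle = position
        return self.handle                # new parameter value

class MotionControl:
    """Relative transducer (e.g. thumb-wheel): output = value + delta."""
    def attach(self, current_value):
        self.value = current_value        # no normalizing step needed

    def nudge(self, delta):
        self.value += delta
        return self.value

wheel = MotionControl()
wheel.attach(0.2)                         # switch context to parameter B
wheel.nudge(0.1)                          # immediately "push" B upward
```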



The above example brings up one important point: the different levels of the taxonomies of Foley and Van Dam or of Moran are not orthogonal. By describing the user interface in terms of a horizontal structure, it is very easy to fall into the trap of believing that the effect of modifications at one level will be isolated. This is clearly not true as the above example demonstrated: the choice of transducer type had a strong effect on syntax.

The example is not isolated. In fact, just as strong an argument could be made for adopting a model based on a vertical structure as the horizontal ones which we have discussed. Models based on interaction techniques such as those described in Martin (1973) and Foley, Wallace and Chan (1981) are examples. With them, the primary gestalt is the transaction, or interaction. The user model is described in terms of the set and style of the interactions which take place over time. Syntactic, lexical and pragmatic questions become sub-issues.

Neither the horizontal nor the vertical view is "correct". The point is that both must be kept in mind during the design process. A major challenge is to adapt our models so that this is done in a well structured way. That we still have problems in doing so can be seen in Moran's taxonomy. Much of the difficulty in understanding the model is due to problems in his approach to integrating vertically oriented concepts (the interaction level) into an otherwise horizontal structure.

In spite of such difficulties, both views must be considered. This is an important cautionary bell to ring given the current trend towards delegating personal responsibilities according to horizontal stratification. The design of a system's data-base, for example, has a very strong effect on the semantics of the interactions that can be supported. If the computing environment is selected by one person, the data-base managed by another, the semantics or functional capability by another, and the "user interface" by yet another, there is an inherent danger that the decisions of one will adversely affect another. This is not to say that such an organizational structure cannot work. It is just imperative that we be aware of the pitfalls so that they can be avoided. Decisions made at all levels affect one another and all decisions potentially have an effect on the user model.



Two taxonomies for describing the problem domain of the user interface were described. In the discussion it was pointed out that the outer levels of the strata, those concerning lexical, spatial, and physical issues, were neglected. The notion of pragmatics was introduced in order to facilitate focusing attention on these issues. Several examples were then examined which illustrated why this was important. In so doing, it was seen that the power of various existing models could be extended if we had a better understanding of pragmatic issues. As a step towards such an understanding, a taxonomy of hand-controlled continuous input devices was introduced. It was seen that this taxonomy made some contribution towards addressing problems which arose in the case studies. It was also seen, however, that issues at this outer level of devices had a potentially strong effect on the other levels of the system. Hence, the danger of over-concentration on horizontal stratification was pointed out.

The work reported has made some contribution towards an understanding of the effect of issues which we have called pragmatics. It is, however, a very small step. While there is a great deal of work still to be done right at the device level, perhaps the biggest challenge is to develop a better understanding of the interplay among the different levels in the strata of a system. When we have developed a methodology which allows us to determine the gesture that best suits the expression of a particular concept, then we will be able to build the user interfaces which today are only a dream.



The ideas presented in this paper have developed over a period of time and owe much to discussions with our students and colleagues. In particular, a great debt is owed to Ron Baecker who was responsible for helping formulate many of the ideas presented. In addition, we would like to acknowledge the contribution of Alain Fournier, Russell Kirsch, Eugene Fiume and Ralph Hill in the intellectual development of the paper, and the help of Monica Delange in the preparation of the manuscript. Finally, we gratefully acknowledge the financial support of the Natural Sciences and Engineering Research Council of Canada.


Baecker, R. (1980a). Human-Computer Interactive Systems: A State-of-the-Art Review. In P. Kolers, E. Wrolstad & H. Bouma, Eds., Processing of Visible Language II, pp. 423 - 444. New York: Plenum.

Baecker, R. (1980b). Towards an Effective Characterization of Graphical Interaction. In R. A. Guedj, P. Ten Hagen, F. Hopgood, H. Tucker & D. Duce, Eds., Methodology of Interaction, pp. 127 - 148. Amsterdam: North-Holland.

Barnard, P., Hammond, N., Morton, J. & Long, J. (1981). Consistency and Compatibility in Human-Computer Dialogue. International Journal of Man-Machine Studies 15, 87 - 134.

Buxton, W. (1982). An Informal Study of Selection-Positioning Tasks. Proceedings of Graphics Interface '82, Toronto, 323 - 328.

Card, S., English, W. & Burr, B. (1978). Evaluation of Mouse, Rate-Controlled Isometric Joystick, Step Keys, and Text Keys for Text Selection on a CRT. Ergonomics 21 (8), 601 - 613.

Card, S., Moran, T. & Newell, A. (1980a). Computer Text Editing: an Information-Processing Analysis of a Routine Cognitive Skill. Cognitive Psychology 12, 32 - 74.

Card, S., Moran, T. & Newell, A. (1980b). The Keystroke-Level Model for User Performance Time with Interactive Systems. Communications of the ACM 23 (7), 396 - 410.

Embley, D., Lan, M., Leinbaugh, D. & Nagy, G. (1978). A Procedure for Predicting Program Editor Performance from the User's Point of View. International Journal of Man-Machine Studies 10, 639 - 650.

Evans, K., Tanner, P. & Wein, M. (1981). Tablet-Based Valuators That Provide One, Two, or Three Degrees of Freedom. Computer Graphics 15 (3), 91 - 97.

Fedorkow, G., Buxton, W. & Smith, K. C. (1978). A Computer Controlled Sound Distribution System for the Performance of Electroacoustic Music. Computer Music Journal 2 (3), 33 - 42.

Foley, J., Wallace, V. & Chan, P. (1981). The Human Factors of Interaction Techniques. Technical Report GWU-IIST-81-03, Washington: The George Washington University, Institute for Information Science and Technology.

Foley, J. & Van Dam, A. (1982). Fundamentals of Interactive Computer Graphics. Reading, MA: Addison Wesley.

GSPC (1977). Status Report of the Graphics Standards Planning Committee. Computer Graphics 11 (3).

GSPC (1979). Status Report of the Graphics Standards Planning Committee. Computer Graphics 13 (3).

Herot, C. & Weinzapfel, G. (1978). One-Point Touch Input of Vector Information for Computer Displays. Computer Graphics 12 (3), 210 - 216.

Ledgard, H., Whiteside, J., Singer, A. & Seymour, W. (1980). The Natural Language of Interactive Systems. Communications of the ACM 23 (10), 556 - 563.

Martin, J. (1973). Design of Man-Computer Dialogues. Englewood Cliffs, NJ: Prentice-Hall.

Moran, T. (1981). The Command Language Grammar: a Representation for the User Interface of Interactive Computer Systems. International Journal of Man-Machine Studies 15, 3 - 50.


Reisner, P. (1977). Use of Psychological Experimentation as an Aid to Development of a Query Language. IEEE Transactions on Software Engineering 3 (3), 218 - 229.

Reisner, P. (1981). Formal Grammar and Human Factors Design of an Interactive Graphics System. IEEE Transactions on Software Engineering 7 (2), 229 - 240.


1 Barnard et al invalidate Ledgard et al's main thesis that the syntax of natural language is necessarily the best suited for command languages. They demonstrate cases where fixed-field format is less prone to user error than the direct object - indirect object syntax of natural language. A major problem of the paper of Ledgard et al is that they did not test many of the interesting cases and then drew conclusions that went beyond what their results supported.

2 The TASA X-Y 360 is a 4" by 4" touch sensitive device which gives 60 units of delta modulation in 4 inches of travel. The device is available from TASA, 2346 Walsh Ave., Santa Clara CA, 95051.

Lexical Approach 1 - What does the lexical approach look like?

Carlos Islam, The University of Maine

Ivor Timmis, Leeds Metropolitan University

This article looks at the theories of language which form the foundations of the lexical approach to teaching English.


The theory of language

Principle 1 - Grammaticalised Lexis

Principle 2 - Collocation in action

About the Authors

Further Reading


The principles of the Lexical Approach have been around since Michael Lewis published 'The Lexical Approach' 10 years ago. It seems, however, that many teachers and researchers do not have a clear idea of what the Lexical Approach actually looks like in practice.

In this first of two THINK articles we look at how advocates of the Lexical Approach view language. In our second THINK article we apply theories of language learning to a Lexical Approach and describe what lexical lessons could look like.

We have also produced two TRY pieces containing teaching materials for you to try out in your own classrooms. Your feedback, opinions, comments and suggestions would be more than welcome and used to form the basis of a future article.

The theory of language

Task 1

Look at this version of the introduction. What do the parts printed in square brackets have in common?

The principles of the Lexical Approach have [been around] since Michael Lewis published 'The Lexical Approach' [10 years ago]. [It seems, however, that] many teachers and researchers do not [have a clear idea of] what the Lexical Approach actually [looks like] [in practice].

All the parts in brackets are fixed or set phrases. Different commentators use different and overlapping terms: 'prefabricated phrases', 'lexical phrases', 'formulaic language' and 'frozen and semi-frozen phrases' are just some of them. We use just two: 'lexical chunks' and 'collocations'.

'Lexical chunk' is an umbrella term which includes all the other terms. We define a lexical chunk as any pair or group of words which is commonly found together, or in close proximity.

'Collocation' is also included in the term 'lexical chunk', but we refer to it separately from time to time, so we define it as a pair of lexical content words commonly found together. Following this definition, 'basic' + 'principles' is a collocation, but 'look' + 'at' is not because it combines a lexical content word and a grammar function word. Identifying chunks and collocations is often a question of intuition, unless you have access to a corpus.
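Where a corpus is available, collocation candidates can be extracted automatically by counting adjacent word pairs. The sketch below is our own illustration, not part of the original article: the tiny corpus and the stop list of function words are invented, and a real study would use a large corpus and a statistical association measure rather than raw counts.

```python
from collections import Counter

# Toy corpus (invented for illustration; a real study would use a large corpus).
corpus = (
    "the basic principles of the lexical approach "
    "rest on basic principles of corpus study "
    "and the basic principles of noticing"
).split()

# Count adjacent word pairs (bigrams); frequent pairs are collocation candidates.
bigrams = Counter(zip(corpus, corpus[1:]))

# Filter out pairs containing grammar function words, so that only
# content-word pairs (collocations proper, e.g. 'basic' + 'principles') remain.
function_words = {"the", "of", "on", "and", "a", "an", "in", "to"}
collocations = {
    pair: count for pair, count in bigrams.items()
    if count > 1 and not set(pair) & function_words
}
print(collocations)  # {('basic', 'principles'): 3}
```

Note how the frequent pair 'principles' + 'of' is discarded: it combines a content word with a function word, so by the definition above it is a chunk but not a collocation.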

Here are some examples.

Lexical Chunks (that are not collocations)

by the way

up to now

upside down

If I were you

a long way off

out of my mind

Lexical Chunks (that are collocations)

totally convinced

strong accent

terrible accident

sense of humour

sounds exciting

brings good luck


Principle 1 - Grammaticalised Lexis

In recent years it has been recognised both that native speakers have a vast stock of these lexical chunks and that these lexical chunks are vital for fluent production. Fluency does not depend so much on having a set of generative grammar rules and a separate stock of words - the 'slot and filler' or open choice principle - as on having rapid access to a stock of chunks:

"It is our ability to use lexical phrases that helps us to speak with fluency. This prefabricated speech has both the advantages of more efficient retrieval and of permitting speakers (and learners) to direct their attention to the larger structure of the discourse, rather than keeping it narrowly focused on individual words as they are produced" (Nattinger and DeCarrico 1992).

The basic principle of the lexical approach, then, is: "Language is grammaticalised lexis, not lexicalised grammar" (Lewis 1993). In other words, lexis is central in creating meaning, while grammar plays a subservient, managerial role. If you accept this principle, then the logical implication is that we should spend more time helping learners develop their stock of phrases, and less time on grammatical structures.

Let's look at an example of lexical chunks or prefabricated speech in action:

Chris: Carlos tells me Naomi fancies him.

Ivor: It's just a figment of his imagination.

According to the theory we have just outlined, it is not the case that Ivor has accessed 'figment' and 'imagination' from his vocabulary store and then accessed the structure it + to be + adverb + article + noun + of + possessive adjective + noun from the grammar store. It is more likely that Ivor has accessed the whole chunk in one go. We have, in Peters' words, in addition to vocabulary and grammar stores, a 'phrasebook with grammatical notes'. Probably, the chunk is stored something like this:

It is/was + (just/only) + a figment of + possessive + imagination
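As a toy illustration of a chunk with slots (our own sketch, not part of the original article), the stored pattern above can be written as a single template with fixed words and a few open slots. The regular expression and test sentences below are invented; real mental storage is, of course, not regex-like.

```python
import re

# The 'figment' chunk as one pattern: a fixed core plus optional and variable slots.
figment_chunk = re.compile(
    r"\bit(?:\s+is|\s+was|'s)"               # it is / it was / it's
    r"(?:\s+just|\s+only)?"                  # optional adverb slot
    r"\s+a\s+figment\s+of"                   # fixed core of the chunk
    r"\s+(?:my|your|his|her|its|our|their)"  # possessive slot
    r"\s+imagination\b",
    re.IGNORECASE,
)

for sentence in [
    "It's just a figment of his imagination.",
    "It was only a figment of my imagination.",
    "It is a figment of their imagination.",
]:
    assert figment_chunk.search(sentence)  # every variant matches the one chunk
```

The point is that all three sentences are instances of one stored unit, retrieved whole, rather than eight words assembled from separate vocabulary and grammar stores.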

Accessing, in effect, eight words in one go allows the speaker to speak fluently and to focus on other aspects of the discourse - more comments about Carlos, for example. We can make two more points about this example:

A number of friends and colleagues were asked to give an example of the word 'figment'. They all gave an example which corresponds to our chunk above. When asked to define the word 'figment', hardly anyone could do this accurately. This is an example of how native speakers routinely use chunks without analysing the constituent parts.

There is nothing intrinsically negative in the dictionary definition of the word 'figment', yet it is always, in our experience, used dismissively or derisively. This is an example of how we store information about a word which goes beyond its simple meaning.

Principle 2 - Collocation in action

In an application form, a candidate referred to a 'large theme' in his thesis. This sounded ugly, but there is nothing intrinsically ugly about either word; it is just a strange combination to a native-speaker ear. In the Lexical Approach, sensitising students to acceptable collocations is very important, so you might find this kind of task:

Underline the word which does not collocate with 'theme':

main theme / large theme / important theme / central theme / major theme

Task 2

Complete the following sentences with as many different words as you can.

(a) The Lexical Approach has had a strong…………….on me.

(b) Carlos and Ivor ……………..me to try out the Lexical Approach.

A second important aspect of the Lexical Approach is that lexis and grammar are closely related. If you look at the examples above, you will see in (a) that three semantically related words - impact, influence, effect - behave the same way grammatically: have a/an impact/influence/effect on something. In (b), verbs connected with initiating action - encourage, persuade, urge, advise, etc. - all follow the pattern verb + object + infinitive. This kind of 'pattern grammar' is considered to be important in the Lexical Approach.
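The two patterns above can be sketched as templates with a slot for the semantically related words. This is our own illustration, not from the article: the sentence frames and word lists are invented, and the a/an rule is a deliberate simplification.

```python
# Pattern (a): have a/an + impact/influence/effect + on something
nouns = ["impact", "influence", "effect"]

def indefinite_article(word: str) -> str:
    # Crude a/an choice by initial letter (adequate for these examples).
    return "an" if word[0] in "aeiou" else "a"

pattern_a = [
    f"The Lexical Approach has had {indefinite_article(n)} {n} on me."
    for n in nouns
]

# Pattern (b): verb + object + infinitive, with verbs of initiating action
verbs = ["encouraged", "persuaded", "urged", "advised"]
pattern_b = [
    f"Carlos and Ivor {v} me to try out the Lexical Approach."
    for v in verbs
]

for sentence in pattern_a + pattern_b:
    print(sentence)
```

Each list is one grammatical frame filled by a set of semantically related words - exactly the regularity that 'pattern grammar' asks learners to notice.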

About the authors

Carlos Islam teaches ESL and Applied Linguistics at the University of Maine. He is also involved in materials writing projects, editing Folio (the journal of the Materials Development Association www.matsda.org.uk ) and language acquisition research.

Ivor Timmis is Lecturer in Language Teaching and Learning at Leeds Metropolitan University. He teaches on the MA in Materials Development for Language Teachers, works on materials development consultancies and is also involved in corpus linguistic research.

Further reading

Baigent, Maggie (1999). Teaching in chunks: integrating a lexical approach. Modern English Teacher 8(2): 51-54.

Lewis, Michael (1993), The Lexical Approach, Hove: Language Teaching Publications.

Lewis, Michael (1996). Implications of a lexical view of language. In Challenge And Change In Language Teaching, Jane Willis and Dave Willis (eds.). Oxford: Heinemann.

Lewis, Michael (1997). Implementing the Lexical Approach: Putting Theory Into Practice. Hove: Language Teaching Publications.

Lewis, Michael (2000). Language in the lexical approach. In Teaching Collocation: Further Developments In The Lexical Approach, Michael Lewis (ed.), 126-154. Hove: Language Teaching Publications.

Nattinger, James R. and DeCarrico Jeanette S. (1992). Lexical Phrases and Language Teaching. Oxford: Oxford University Press.

Pawley, Andrew and Syder, Frances Hodgetts. (1983). Two puzzles for linguistic theory: native like selection and native like fluency. In Language And Communication, Jack C. Richards and Richard W. Schmidt (eds.), 191-225. London: Longman.

Thornbury, Scott (1997). Reformulation and reconstruction: tasks that promote 'noticing'. ELT Journal 51(4): 326-334.

Thornbury, Scott (1998). The Lexical Approach: a journey without maps? Modern English Teacher 7(4): 7-13.

Willis, Dave (1990). The Lexical Syllabus: A New Approach To Language Learning. London: Collins ELT.

Woolard, George (2000). Collocation - encouraging learner independence. In Teaching Collocation: Further Developments In The Lexical Approach, Michael Lewis (ed.), 28-46. Hove: Language Teaching Publications.

Readers' comments

Elisabeth Boeck, Germany

Of the lexical approach activities in the TRY section, I found the piece MY BEST FRIEND KYLE in particular a treasure trove of lexical items. The suggestion to highlight lexical chunks in texts when presenting them in class, as a means of sensitising students to the phenomenon, is, to my mind, particularly effective; and I could imagine that, when it comes to reproduction, it might be useful for the teacher to gap-read the text not in one go but paragraph by paragraph, for better retention on the part of the students.

Also, in my experience, the value of the lexical approach is demonstrated beautifully and convincingly by juxtaposing English and native language expressions. In that way students realize that in most cases a word-for-word translation won't help, when previously they perhaps thought that it might do to sling together a few words picked up from the dictionary. I like to say, when presenting idiomatic phrases, standard expressions, social and spoken language chunks etc. "That's the way native speakers typically say things."

And I remember Michael Lewis, in the course of a presentation he gave here in Germany some years ago, saying this: "Whenever someone asks me 'Why is that?' - with reference to the structure of some language item - I will answer: 'That's how it is in English.' Period!"

Lexical Approach 2 - What does the lexical approach look like?

Carlos Islam, The University of Maine

Ivor Timmis, Leeds Metropolitan University

This article applies theories of language learning to the lexical approach and describes what lexical lessons could look like.


The theory of learning


Language awareness

About the Authors

Further Reading


The principles of the Lexical Approach have been around since Michael Lewis published 'The Lexical Approach' 10 years ago. It seems, however, that many teachers and researchers do not have a clear idea of what the Lexical Approach actually looks like in practice.

In the first of our two THINK articles - Lexical approach 1 - we looked at how advocates of the Lexical Approach view language. In this, our second THINK article, we apply theories of language learning to a Lexical Approach and describe what lexical lessons could look like.

We have also produced two TRY pieces containing teaching materials for you to try out in your own classrooms. Your feedback, opinions, comments and suggestions would be more than welcome and used to form the basis of a future article.

The theory of learning

In our first THINK article, Lexical Approach 1, we spoke about the vast number of chunks and collocations native speakers store. According to Lewis (1997, 2000), native speakers carry a pool of hundreds of thousands, and possibly millions, of lexical chunks in their heads, ready to draw upon in order to produce fluent, accurate and meaningful language. These are far too many items for teachers and materials to present, practise and have learners produce, even if one believed that a PPP (presentation, practice, production) methodology - which has been much criticised in recent years - would lead to the acquisition of these language items.

How, then, are learners going to learn the lexical items they need? One of the criticisms levelled at the Lexical Approach is its lack of a detailed learning theory. It is worth noting, however, that Lewis (1993) argues that the Lexical Approach is not a break with the Communicative Approach, but a development of it. He gives a helpful summary of the findings from first language acquisition research which he thinks are relevant to second language acquisition:

Language is not learnt by learning individual sounds and structures and then combining them, but by an increasing ability to break down wholes into parts.

Grammar is acquired by a process of observation, hypothesis and experiment.

We can use whole phrases without understanding their constituent parts.

Acquisition is accelerated by contact with a sympathetic interlocutor with a higher level of competence in the target language.

Schmitt (2000) makes a significant contribution to a learning theory for the Lexical Approach by adding that 'the mind stores and processes these [lexical] chunks as individual wholes.' The mind can store large amounts of information in long-term memory, but its short-term capacity - drawn on when producing speech, for example - is much more limited, so it is much more efficient for the brain to recall a chunk of language as if it were one piece of information. 'Figment of his imagination' is therefore recalled as one piece of information rather than four separate words.

In our view it is neither possible nor desirable to attempt to 'teach' an unlimited number of lexical chunks. But it is beneficial for language learners to gain exposure to lexical chunks, and experience in analysing those chunks, in order to begin the process of internalisation. We believe, like Lewis, that encouraging learners to notice language, specifically lexical chunks and collocations, is central to any methodology connected to a lexical view of language.



Batstone (1996) describes noticing as 'a complex process: it involves the intake both of meaning and form, and it takes time for learners to progress from initial recognition to the point where they can internalize the underlying rule'. At the same time Lewis (2000) argues that noticing chunks and collocations is a necessary but not sufficient condition for input to become intake. If learners are not directed to notice language in a text there exists a danger that they will 'see through the text' and therefore fail to achieve intake.

Looking back at the tasks and activities in our TRY materials, you can see they are designed to promote noticing. Sometimes the noticing is guided by the teacher, i.e. the teacher directs the students' attention to lexical features thought to be useful; sometimes it is 'self-directed', i.e. the students themselves select features they think will be useful for them. Sometimes the noticing is explicit, e.g. when items in a text are highlighted; sometimes it is implicit, e.g. when the teacher reformulates a student's text (see Thornbury 1997 for an explanation of how reconstruction and reformulation can enhance noticing, and for practical suggestions for reformulating).

Language Awareness

It is our assertion that learning materials and teachers can best help learners achieve noticing of lexical chunks by combining a Language Awareness approach to learning with a Lexical Approach to describing language.

Tomlinson (2003) sums up the principles, objectives and procedures of a language awareness approach as:

'Paying deliberate attention to features of language in use can help learners to notice the gap between their own performance in the target language and the performance of proficient users of the language.

Noticing can give salience to a feature, so that it becomes more noticeable in future input, so contributing to the learner's psychological readiness to acquire that feature.

The main objective is to help learners to notice for themselves how language is typically used so that they will note the gaps and 'achieve learning readiness' [as well as independence from the teacher and teaching materials].

The first procedures are usually experiential rather than analytical and aim to involve the learners in affective interaction with a potentially engaging text. [That is, learners read a text, and respond with their own views and opinions before studying the language in the text or answering comprehension type questions.]

Learners are later encouraged to focus on a particular feature of the text, identify instances of the feature, make discoveries and articulate generalizations about its use.'

In a small research project at The University of Maine, groups of students were exposed to materials (see TRY 1) based on the principles and procedures Tomlinson outlines. The noticing activities asked students to identify, analyse and make generalisations about lexical chunks and collocations.

The students involved in the research were surveyed after using these materials and asked how useful and enjoyable they found the materials.

All but one of the students said the materials were very useful and all the students reported the class was either very useful or useful.

All the students said the materials would help them learn independently.

Over half the students thought the materials were useful for learning vocabulary.

All the students said they enjoyed the stories.

The teachers said that the readings were 'great', and that the students understood and could appreciate the materials' relevance for developing reading as well as productive skills.

One teacher said he was not sure if making the distinction between different types of lexical chunks was necessary.

We hope these THINK articles and TRY materials shed some light on what a Lexical Approach could look like in teaching materials, and provide some ideas about how it might appear in the classroom.

About the authors

Carlos Islam teaches ESL and Applied Linguistics at the University of Maine. He is also involved in materials writing projects, editing Folio (the journal of the Materials Development Association www.matsda.org.uk ) and language acquisition research.

Ivor Timmis is Lecturer in Language Teaching and Learning at Leeds Metropolitan University. He teaches on the MA in Materials Development for Language Teachers, works on materials development consultancies and is also involved in corpus linguistic research.