
The Elements Of Grounded Theory

The three basic elements of grounded theory are concepts, categories and propositions. Concepts are the basic units of analysis since it is from conceptualisation of data, not the actual data per se, that theory is developed. Corbin and Strauss (1990, p. 7) state:

Theories can't be built with actual incidents or activities as observed or reported; that is, from "raw data." The incidents, events, happenings are taken as, or analysed as, potential indicators of phenomena, which are thereby given conceptual labels. If a respondent says to the researcher, "Each day I spread my activities over the morning, resting between shaving and bathing," then the researcher might label this phenomenon as "pacing." As the researcher encounters other incidents, and when after comparison to the first, they appear to resemble the same phenomena, then these, too, can be labelled as "pacing." Only by comparing incidents and naming like phenomena with the same term can the theorist accumulate the basic units for theory.

The second element of grounded theory, categories, is defined by Corbin and Strauss (1990, p. 7) thus:

Categories are higher in level and more abstract than the concepts they represent. They are generated through the same analytic process of making comparisons to highlight similarities and differences that is used to produce lower level concepts. Categories are the "cornerstones" of developing theory. They provide the means by which the theory can be integrated. We can show how the grouping of concepts forms categories by continuing with the example presented above. In addition to the concept of "pacing," the analyst might generate the concepts of "self-medicating," "resting," and "watching one's diet." While coding, the analyst may note that, although these concepts are different in form, they seem to represent activities directed toward a similar process: keeping an illness under control. They could be grouped under a more abstract heading, the category: "Self Strategies for Controlling Illness."

The third element of grounded theory is propositions, which indicate generalised relationships between a category and its concepts and between discrete categories. This third element was originally termed 'hypotheses' by Glaser and Strauss (1967). The term 'propositions' is felt to be more appropriate since, as Whetten (1989, p. 492) correctly points out, propositions involve conceptual relationships whereas hypotheses require measured relationships. Since the grounded approach produces conceptual and not measured relationships, the former term is preferred.
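To make the relationship between these three elements concrete, the following minimal sketch (written in Python purely for illustration; the illness-management data simply echo the Corbin and Strauss example above and are invented, not drawn from any actual study) shows incidents being labelled with concepts, concepts being grouped under a category, and a proposition being stated as a conceptual relationship.

    from collections import defaultdict

    # Hypothetical raw incidents (the "data per se") paired with the conceptual
    # labels an analyst might attach to them when comparing like with like.
    incidents = [
        ("I spread my activities over the morning, resting between tasks", "pacing"),
        ("I take my tablets as soon as the pain starts", "self-medicating"),
        ("I lie down every afternoon", "resting"),
        ("I avoid salty food", "watching one's diet"),
    ]

    # Concepts: like incidents grouped under the same label.
    concepts = defaultdict(list)
    for text, label in incidents:
        concepts[label].append(text)

    # Category: a more abstract heading that groups related concepts.
    category = {
        "name": "Self Strategies for Controlling Illness",
        "concepts": sorted(concepts),
    }

    # Proposition: a conceptual (not measured) relationship between the
    # category and its concepts.
    proposition = (
        "Concepts such as " + ", ".join(category["concepts"]) +
        " are activities directed toward the same process: keeping an illness under control."
    )

    print(category)
    print(proposition)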

The generation and development of concepts, categories and propositions is an iterative process. Grounded theory is not generated a priori and then subsequently tested. Rather, it is,

... inductively derived from the study of the phenomenon it represents. That is, discovered, developed, and provisionally verified through systematic data collection and analysis of data pertaining to that phenomenon. Therefore, data collection, analysis, and theory should stand in reciprocal relationship with each other. One does not begin with a theory, then prove it. Rather, one begins with an area of study and what is relevant to that area is allowed to emerge. (Strauss and Corbin, 1990, p. 23. Emphasis added.)

The Process Of Grounded Theory Building

Five analytic (and not strictly sequential) phases of grounded theory building were identified: research design, data collection, data ordering, data analysis and literature comparison. Within these phases, nine procedures or steps were followed. These phases and steps were evaluated against four research quality criteria: construct validity, internal validity, external validity and reliability. Briefly, construct validity is enhanced by establishing clearly specified operational procedures. Internal validity is enhanced by establishing causal relationships whereby certain conditions are shown to lead to other conditions, as distinguished from spurious relationships. In this sense, internal validity addresses the credibility or "truth value" of the study's findings. External validity requires establishing clearly the domain to which the study's findings can be generalised. Here, the reference is to analytic and not statistical generalisation: a particular set of findings is generalised to some broader theory rather than to a broader population. Finally, reliability requires demonstrating that the operations of a study - such as data collection procedures - can be repeated with the same results.

Table 1 provides an overview of these phases, steps and tests and forms the template for the subsequent discussion which moves from a normative or prescriptive account of recommended activities to a descriptive account of how these prescriptions were applied in the study.

Table 1: The Process of Building Grounded Theory

RESEARCH DESIGN PHASE

Step 1: Review of technical literature
Activity: Definition of research question; definition of a priori constructs
Rationale: Focuses efforts; constrains irrelevant variation and sharpens external validity

Step 2: Selecting cases
Activity: Theoretical, not random, sampling
Rationale: Focuses efforts on theoretically useful cases (e.g., those that test and/or extend theory)

DATA COLLECTION PHASE

Step 3: Develop rigorous data collection protocol
Activity: Create case study database; employ multiple data collection methods; use qualitative and quantitative data
Rationale: Increases reliability; increases construct validity; strengthens grounding of theory by triangulation of evidence; enhances internal validity; gives a synergistic view of the evidence

Step 4: Entering the field
Activity: Overlap data collection and analysis; use flexible and opportunistic data collection methods
Rationale: Speeds analysis and reveals helpful adjustments to data collection; allows investigators to take advantage of emergent themes and unique case features

DATA ORDERING PHASE

Step 5: Data ordering
Activity: Arraying events chronologically
Rationale: Facilitates easier data analysis; allows examination of processes

DATA ANALYSIS PHASE

Step 6: Analysing data relating to the first case
Activity: Use open coding; use axial coding; use selective coding
Rationale: Develops concepts, categories and properties; develops connections between a category and its sub-categories; integrates categories to build a theoretical framework; all forms of coding enhance internal validity

Step 7: Theoretical sampling
Activity: Literal and theoretical replication across cases (go to step 2 until theoretical saturation)
Rationale: Confirms, extends, and sharpens the theoretical framework

Step 8: Reaching closure
Activity: Theoretical saturation when possible
Rationale: Ends the process when the marginal improvement becomes small

LITERATURE COMPARISON PHASE

Step 9: Compare emergent theory with extant literature
Activity: Comparisons with conflicting frameworks; comparisons with similar frameworks
Rationale: Improves construct definitions, and therefore internal validity; also improves external validity by establishing the domain to which the study's findings can be generalised

Research Design Phase

Research design is defined by Easterby-Smith et al. (1990, p. 21) as,

... the overall configuration of a piece of research: what kind of evidence is gathered from where, and how such evidence is interpreted in order to provide good answers to the basic research question[s].

It follows logically that the first step is to define the basic research questions. These should be defined narrowly enough to focus the research yet broadly enough to allow for flexibility and serendipity.

A good source of research questions in grounded theory studies is the 'technical literature' (i.e., reports of research studies and theoretical and philosophical papers characteristic of professional and disciplinary writing) on the general problem area (Strauss and Corbin, 1990, p. 52).

Once basic research questions have been generated and the research is focused, the next aspect of research design and the second step is to select the first case. Cases (the principal units of data in this research) should be selected according to the principle of theoretical sampling:

The process of data collection for generating theory whereby the analyst jointly collects, codes, and analyses his data and decides what data to collect next and where to find them, in order to develop his theory as it emerges. (Glaser and Strauss, 1967, p. 45.)

Accordingly,

Unlike the sampling done in quantitative investigations, theoretical sampling cannot be planned before embarking on a grounded theory study. The specific sampling decisions evolve during the research process itself. (Strauss and Corbin, 1990, p. 192)

During initial data collection, when the main categories are emerging, a full 'deep' coverage of the data is necessary. Subsequently, theoretical sampling requires only collecting data on categories, for the development of properties and propositions. The criterion for judging when to stop theoretical sampling is the category's or theory's 'theoretical saturation'. By this term Glaser and Strauss refer to the situation in which:

... no additional data are being found whereby the (researcher) can develop properties of the category. As he sees similar instances over and over again, the researcher becomes empirically confident that a category is saturated ... when one category is saturated, nothing remains but to go on to new groups for data on other categories, and attempt to saturate these categories also. (1967, p. 65.)

A qualification springs from the fact that not all categories are equally relevant, and accordingly the depth of enquiry into each one should not be the same. As a general rule, core categories, those with the greatest explanatory power, should be saturated as completely as possible. A theory is saturated when it is stable in the face of new data and rich in detail.
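Although the judgement of saturation is qualitative, the underlying stopping rule can be sketched programmatically. The short Python illustration below is a hypothetical aside (the category properties and the stability threshold are invented, not taken from the grounded theory literature): a category is treated as saturated once successive cases stop yielding new properties.

    def saturated(cases, min_stable_cases=2):
        """Return (is_saturated, known_properties) for a stream of analysed cases.

        Each case is a set of properties observed for the category; the category
        is treated as saturated after several consecutive cases add nothing new.
        """
        known, stable_run = set(), 0
        for properties in cases:
            newly_found = properties - known
            known |= newly_found
            stable_run = 0 if newly_found else stable_run + 1
            if stable_run >= min_stable_cases:
                return True, known
        return False, known

    # Invented example: properties of one category observed across four cases.
    observed = [{"timing", "sequencing"}, {"timing", "effort"}, {"timing"}, {"effort"}]
    print(saturated(observed))  # (True, {'timing', 'sequencing', 'effort'})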

Theoretical sampling translates in practical terms into two sampling events. An initial case is selected and, on the basis of the data analysis pertaining to that case and hence the emerging theory, additional cases are selected.

The initial case (unit of data) in this study was the technical literature on the subject of corporate turnaround. Strauss and Corbin support this approach and state:

The literature can be used as secondary sources of data. Research publications often include quoted materials from interviews and field notes and these quotations can be used as secondary sources of data for your own purposes. The publications may also include descriptive materials concerning events, actions, settings, and actors' perspectives, that can be used as data using the methods described. (1990, p. 52.)

The grounded analysis of the first ('literature') case led to the generation of the initial theoretical framework of corporate turnaround. Additional ('empirical') cases were then selected, one at a time, to test and extend this framework.

To recall, according to the principle of theoretical sampling, each additional case should serve specific purposes within the overall scope of enquiry. Three options are identified by Yin (1989, pp. 53-54):

(a) choose a case to fill theoretical categories, to extend the emerging theory; and/or,

(b) choose a case to replicate previous case(s) to test the emerging theory; or,

(c) choose a case that is a polar opposite to extend the emerging theory.

Logically, this implies that each additional case must be carefully selected so that it either produces similar results (a literal replication - options (a) and (b) above) or produces contrary results but for predictable reasons (a theoretical replication - option (c) above).

The second case or unit of data, Fisons plc, which experienced a turnaround during the period 1975-84, was selected for the purpose of literal replication, that is, to fill theoretical categories and to test the emerging theory. The third case, British Steel Corporation (BSC), which experienced a turnaround during the period 1975-89, was again chosen for the purposes of literal replication.

After the analysis of the three cases, the marginal improvement to the theoretical framework was small. Theoretical saturation via literal replication had been approached and the decision to conclude the research was taken. This experience is corroborated by Martin and Turner (1986, p. 149) who state:

By the time three or four sets of data have been analysed, the majority of useful concepts will have been discovered.

Data Collection Phase

The grounded approach advocates the use of multiple data sources converging on the same phenomenon and terms these 'slices of data.' Glaser and Strauss (1967, p. 65) state,

In theoretical sampling, no one kind of data on a category nor technique for data collection is necessarily appropriate. Different kinds of data give the analyst different views or vantage points from which to understand a category and to develop its properties; these different views we have called slices of data. While the [researcher] may use one technique of data collection primarily, theoretical sampling for saturation of a category allows a multifaceted investigation, in which there are no limits to the techniques of data collection, the way they are used, or the types of data acquired. (Emphasis in original.)

Similarly, Eisenhardt (1989, p. 538) states:

... case study research can involve qualitative data only, quantitative only, or both. Moreover, the combination of data types can be highly synergistic.

The synergy (or 'data triangulation') referred to works as follows: quantitative data can indicate directly observable relationships and corroborate the findings from qualitative data, while qualitative data can help the researcher understand the rationale of the theory and the underlying relationships.

The use of multiple data sources thus enhances construct validity and reliability. The latter is further enhanced through the preparation of a case study database which is a formal assembly of evidence distinct from the case study report. Yin (1989, pp. 98-99) states:

Every case study project should strive to develop a formal, retrievable database, so that in principle, other investigators can review the evidence directly and not be limited to the written reports. In this manner, the database will increase markedly the reliability of an entire case study. (Emphasis added.)

To summarise, the third step is to develop a rigorous data collection protocol by employing multiple data collection methods using both qualitative and quantitative data and systematically establishing a case study database.

The principal data source in this study for the two 'empirical' cases (i.e., Fisons and BSC) was archival material in the form of reports in newspapers, trade journals, business journals, government publications, broker reviews, annual company documents and press releases. These data were extracted in computerised form (i.e., ASCII files) from the Reuter Textline and Predicasts PROMT (Predicasts Overview of Markets and Technology) databases. There are over 600 active information sources contributing to Reuter Textline, of which approximately 200 are primary sources and over 400 are associated sources provided by third party contributors. The earliest material on Reuter Textline dates from 1980. Predicasts is the largest on-line source of business information of its kind. The Predicasts family of complementary databases contains more than 5,000,000 article abstracts, forecasts, statistical series and full text records from a broad range of business, industry and government sources. The earliest material on PROMT dates from 1975.

It was whilst reading a paper by Turner (1983) that the idea of developing grounded theory based on this type of data was formed. In that research project,

... documentary sources were treated like sets of field notes. Analysis and category generation was commenced at the first paragraph of the report, and a theoretical framework generated which would handle the aspects perceived to be of interest to each paragraph. (1983, p. 342.)

Case study databases were constructed within the qualitative data analysis software package ATLAS. A full discussion of the procedures followed is provided in the data analysis section below.

The fourth step was thus to ensure that data were collected and analysed simultaneously and that flexibility was maintained. This overlap allows adjustments to be made to the data collection process in light of the emerging findings. Eisenhardt describes such flexibility as 'controlled opportunism'.

Data Ordering Phase

The fifth step was data ordering. Following Yin (1989, p. 119), data for the two 'empirical' cases were ordered chronologically:

The arraying of events into a chronology permits the investigator to determine causal events over time, because the basic sequence of a cause and its effect cannot be temporally inverted. However, unlike the more general time-series approaches, the chronology is likely to cover many different types of variables and not be limited to a single independent or dependent variable.
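As a purely illustrative aside, arraying events chronologically amounts to sorting the dated records of a case before analysis. The Python sketch below uses invented article records of the kind described in the data collection phase above; it is not the study's actual database.

    from datetime import date

    # Hypothetical archival records; each is dated so that events can be
    # arrayed chronologically and causes cannot follow their effects.
    articles = [
        {"date": date(1981, 3, 2), "source": "trade journal",
         "headline": "New chief executive appointed"},
        {"date": date(1979, 11, 5), "source": "newspaper",
         "headline": "Losses widen as demand falls"},
        {"date": date(1984, 6, 18), "source": "broker review",
         "headline": "Divestment programme completed"},
    ]

    chronology = sorted(articles, key=lambda a: a["date"])

    for article in chronology:
        print(article["date"].isoformat(), "-", article["headline"])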

Data Analysis Phase

Once data were ordered, the sixth step was to analyse the data. Data analysis is central to grounded theory building research. For the study as a whole, data collection, data ordering, and data analysis were interrelated as depicted in figure 1 (the attached numbers indicate the activity's analytic sequence).

Figure 1: The Interrelated Processes of Data Collection, Data Ordering, and Data Analysis to Build Grounded Theory

[Figure 1 depicts a cycle: Theoretical Sampling (1) leads to Data Collection (2), Data Ordering (3), Data Analysis (4) and Theory Development (5); if theoretical saturation has not been reached the cycle returns to theoretical sampling, and if it has, the process Reaches Closure (6).]

Within this general framework, data analysis for each case involved generating concepts through the process of coding which,

... represents the operations by which data are broken down, conceptualised, and put back together in new ways. It is the central process by which theories are built from data. (Strauss and Corbin, 1990, p. 57.)

There are three types of coding: open coding, axial coding, and selective coding. These are analytic types and it does not necessarily follow that the researcher moves from open through axial to selective coding in a strict, consecutive manner.

Open coding refers to that part of analysis that deals with the labelling and categorising of phenomena as indicated by the data. The products of labelling and categorising are concepts - the basic building blocks in grounded theory construction.

Open coding requires application of what is referred to as 'the comparative method', that is, the asking of questions and the making of comparisons. Data are initially broken down by asking simple questions such as what, where, how, when, how much, etc. Subsequently, data are compared and similar incidents are grouped together and given the same conceptual label. The process of grouping concepts at a higher, more abstract, level is termed categorising.

Whereas open coding fractures the data into concepts and categories, axial coding puts those data back together in new ways by making connections between a category and its sub-categories (i.e., not between discrete categories which is done in selective coding). Thus, axial coding refers to the process of developing main categories and their sub-categories.

Selective coding involves the integration of the categories that have been developed to form the initial theoretical framework.

Firstly, a story line is either generated or made explicit. A story is simply a descriptive narrative about the central phenomenon of study and the story line is the conceptualisation of this story (abstracting). When analysed, the story line becomes the core category:

The core category must be the sun, standing in orderly systematic relationships to its planets. (Strauss and Corbin, 1990, p. 124.)

Subsidiary categories are related to the core category according to the paradigm model, the basic purpose of which is to enable the researcher to think systematically about data and relate them in complex ways. The basic idea is to propose linkages and look to the data for validation (move between asking questions, generating propositions and making comparisons). The basic features of this model are depicted in figure 2 below.

Figure 2: The Paradigm Model

[Figure 2 depicts the paradigm model as a chain: Causal Conditions -> Phenomenon -> Context -> Intervening Conditions -> Action/Interaction Strategies -> Consequences.]

The core category (i.e., the central idea, event or happening) is defined as the phenomenon. Other categories are then related to this core category according to the schema. Causal conditions are the events that lead to the development of the phenomenon. Context refers to the particular set of conditions, and intervening conditions to the broader set of conditions, in which the phenomenon is couched. Action/interaction strategies refer to the actions and responses that occur as a result of the phenomenon and, finally, the outcomes, both intended and unintended, of these actions and responses are referred to as consequences.
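As an illustration only, the paradigm model can be represented as a simple record relating the phenomenon to the other categories. The Python sketch below is hypothetical: the field names merely mirror the paradigm headings, and the turnaround-flavoured example values are placeholders rather than findings reported in this study.

    from dataclasses import dataclass, field
    from typing import List

    # A record type whose fields mirror the paradigm model headings.
    @dataclass
    class ParadigmModel:
        phenomenon: str  # the core category
        causal_conditions: List[str] = field(default_factory=list)
        context: List[str] = field(default_factory=list)
        intervening_conditions: List[str] = field(default_factory=list)
        action_interaction_strategies: List[str] = field(default_factory=list)
        consequences: List[str] = field(default_factory=list)

    # Placeholder values for illustration only.
    model = ParadigmModel(
        phenomenon="recovery strategy content",
        causal_conditions=["sustained deterioration in performance"],
        context=["severity of the crisis"],
        intervening_conditions=["industry characteristics", "macroeconomic change"],
        action_interaction_strategies=["retrenchment", "product/market reorientation"],
        consequences=["stabilisation", "return to growth"],
    )
    print(model)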

An important activity during coding is the writing of memos. Corbin and Strauss (1990, p. 10) maintain that,

Writing theoretical memos is an integral part of doing grounded theory. Since the analyst cannot readily keep track of all the categories, properties, hypotheses, and generative questions that evolve from the analytical process, there must be a system for doing so. The use of memos constitutes such a system. Memos are not simply "ideas." They are involved in the formulation and revision of theory during the research process.

At least three types of memo may be distinguished: code memos, theoretical memos and operational memos. Code memos relate to open coding and thus focus on conceptual labelling. Theoretical memos relate to axial and selective coding and thus focus on paradigm features and indications of process. Finally, operational memos contain directions relating to the evolving research design.
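A minimal sketch of how the three memo types might be recorded alongside the codes they refer to is given below; the structure, dates and memo texts are invented for illustration and are not the study's actual memos.

    from datetime import date

    # Hypothetical memo records distinguishing the three memo types described above.
    memos = [
        {"type": "code", "linked_code": "pacing", "written": date(1993, 4, 2),
         "text": "Label covers any deliberate spreading of activity over time."},
        {"type": "theoretical", "linked_code": "Self Strategies for Controlling Illness",
         "written": date(1993, 5, 11),
         "text": "Sub-categories appear to vary with the severity of symptoms."},
        {"type": "operational", "linked_code": None, "written": date(1993, 5, 12),
         "text": "Sample a contrasting case before attempting to saturate this category."},
    ]

    # For example, retrieve all theoretical memos when integrating categories.
    theoretical = [m for m in memos if m["type"] == "theoretical"]
    print(len(theoretical), "theoretical memo(s) on file")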

In the past, the tools used to aid the type of data analysis elucidated above were simply scissors, a copier and piles of blank paper. In this research project, data were analysed using the qualitative data analysis software package ATLAS which also facilitated the construction of case study databases. The use of computer programs to aid the analysis of qualitative data is a recent innovation:

... there has been considerable progress in the analysis of qualitative data using a variety of specially written computer programs ... There are at present around a dozen programs on the market or under development, each with different characteristics and facilities. (Lee and Fielding, 1991, p. 1.)

The principal advantage of using a program is that it simplifies and speeds the mechanical aspects of data analysis without sacrificing flexibility, thereby freeing the researcher to concentrate to a greater extent on the more creative aspects of theory building:

The thinking, judging, deciding, interpreting, etc., are still done by the researcher. The computer does not make conceptual decisions, such as which words or themes are important to focus on, or which analytical step to take next. These analytical tasks are still left entirely to the researcher. (Tesch, 1991, pp. 25-26.)

Lee and Fielding summarise:

It is likely that computers will bring real benefits to qualitative researchers, making their work easier, more productive and potentially more thorough. (1991, p. 6.)

There are two modes of data analysis within ATLAS: firstly, the 'textual level', which focuses on the raw data and includes activities such as text segmentation, coding and memo writing; and secondly, the 'conceptual level', which focuses on framework-building activities such as interrelating codes, concepts and categories to form theoretical networks. In general, I found the procedures within ATLAS to be both efficient and firmly based on the principles of grounded theory generation.

Once a theoretical framework relating to the first case has been generated, the next and seventh step in theory-building case research is to test and develop this framework by selecting additional cases according to the principle of theoretical sampling, that is, with the aim of extending and/or sharpening the emerging theory by filling in categories that may need further refinement and/or development. The eighth step, reaching closure, is taken according to the principle of theoretical saturation, that is, when the marginal value of the new data is minimal.

Literature Comparison Phase

The ninth and final step is to compare the emerged theory with the extant literature and examine what is similar, what is different, and why. Eisenhardt (1989, p. 545) states:

Overall, tying the emergent theory to existing literature enhances the internal validity, generalisability, and theoretical level of the theory building from case study research ... because the findings often rest on a very limited number of cases.

The emergent theory of corporate turnaround was compared with the extant theories in the broader field of strategic management. This revealed the discovered theory to resemble in many ways Pettigrew's "content-context-process" model of strategic change (1987).

An Overview of a Grounded Theory of Corporate Turnaround

Through the process of open and axial coding in ATLAS/ti a number of concepts and categories were generated and developed. During selective coding (i.e., the integration of categories) the core category was defined and labelled 'recovery strategy content'. The other major categories were then related to this category. The content of appropriate recovery strategies was found to be contingent upon six sets of contextual factors: the causes of decline; the severity of the crisis; the attitude of stakeholders; industry characteristics; changes in the macroeconomic environment; and the firm's historical strategy. The content of recovery strategies was usefully decomposed into operational level actions (management change, improved controls, reduction in production costs, investment in plant and machinery, decentralisation, improved marketing, and restructuring finances) and strategic level actions (asset reduction/divestiture and product/market reorientation). An implementation or process dimension was also discovered: successful actions to effect recovery fall into four distinct (but overlapping) stages (the management change stage, the retrenchment stage, the stabilisation stage and the growth stage). A diagrammatic depiction of this framework is given in figure 3 (see Pandit, 1995 for a fuller discussion).

Figure 3: A Theoretical Framework of Corporate Turnaround

[Figure 3 relates three sets of categories:

Contextual factors: the causes of decline; the severity of the crisis; the attitude of the stakeholders; industry characteristics; changes in the macroeconomic environment; the firm's historical strategy.

Recovery strategy content - operational level: management change; improved controls; reduction in production costs; investment in plant and machinery; decentralisation; improved marketing; restructuring finances. Strategic level: asset reduction/divestiture; product/market reorientation.

Implementation/process of recovery actions: the management change stage; the retrenchment stage; the stabilisation stage; the growth stage.]

Fifty-three propositions linking the concepts and categories within the framework were generated and tested. Table 2 lists a sample of five (see Pandit, 1995, pp. 277-278 for the full list).

Table 2: A Sample of Propositions Generated by the Literature Case and Supported by the Cases of Fisons and BSC

For each proposition, the table records whether it was explicitly supported, implicitly supported, or not referred to in the Fisons case and in the BSC case.

1. A sustained deterioration in performance is the result of both internal and external causes.

2. Successful turnaround firms are more severely affected in terms of financial performance in the downturn phase than unsuccessful recoveries.

3. If the causes of decline are primarily internal in origin, actions that improve efficiency at the operational level should be emphasised to effect successful recovery.

4. If the causes of decline are primarily external in origin, strategic level actions should be emphasised to effect successful recovery.

5. Appropriate recovery actions vary according to industry stage.

Reflections

With respect to the two lesser auxiliary objectives of this study, I found, firstly, that the data available from the on-line databases Reuters Textline and Predicasts PROMT were extremely appropriate for this type of research. The hundreds of articles extracted provided a rich and diverse source of information for the two 'empirical' cases. Events were easily traced over time, differing viewpoints provided much intellectual stimulation, and reported interviews with key people at the time rather than retrospectively provided valuable insights and served as an efficient and effective substitute for conducting similar interviews myself. Ultimately, my assessment of the quality of the data is a tribute to the quality of business journalism in the UK, the USA and continental Europe (where most of the reports I analysed originated).

My second auxiliary objective was to assess the utility of computer-based qualitative data analysis software packages when used in conjunction with on-line data in grounded theory research. In general, I found the packages to be of limited use (rather than easing the process they tend to overcomplicate it) with much development required before they can make a significant impact on the conduct and quality of qualitative research. However, I found the package that I chose (ATLAS) to be very much the exception to the rule. A number of attributes distinguished it from the alternatives. Firstly, it is very 'user-friendly' and operates in a similar manner to the more widely used Windows package developed by Microsoft. Secondly, it is powerful. Given the immense volume of data to be analysed, problems were expected but thankfully never materialised. Finally, it is thoroughly based on the principles of grounded theory generation and therefore few compromises had to be made.

Five problems were encountered in this study. Four relate to the research process and one, more fundamentally, to the research approach. Firstly, the process of grounded theory research is extremely time-consuming. The sheer volume and complexity of data generated for this study was quite daunting, although the use of ATLAS aided matters considerably. Secondly, grounded theory research involves long periods of uncertainty. Without a priori hypotheses to test or an established protocol to follow, much of the first half of the study period required a good measure of faith and hope. Thankfully, there did come a time, after much patience, persistence and perspiration, when things became clearer. Thirdly, the data extracted from the two on-line databases were sometimes found to be incomplete. Often, and particularly with long articles, only summaries and not the full text were available. This was particularly disappointing given that longer articles are usually the most informative and, therefore, potentially of most use. I estimate that about 10 per cent of the data extracted was shortened in this way. Also, any graphs related to an article are not reproduced in computerised form within the databases, so once again valuable information is lost. Fourthly, collecting data from on-line databases is expensive. Fortunately, my access to Reuters Textline was at a preferential rate (used for promotional purposes) and my access to Predicasts PROMT was free of charge owing to the fact that it was on a limited trial period. Without these two contingencies my runs (which amounted to well over 100 hours of on-line time) would have cost many thousands of pounds sterling.

Let us now turn to the fifth and more fundamental problem, which concerns the overall approach of this study. Grounded theory research requires certain qualities of the researcher. In particular, confidence, creativity and experience (both of doing research and of the context(s) being researched) are of great benefit. Accordingly, the approach does not favour the novice researcher who may be just beginning to develop these qualities. This is not to say that novice researchers should not embark upon grounded theory studies; rather, it is to suggest that (a) they are likely to find the approach more difficult than more conventional methodologies and (b) the more experienced (probably postdoctoral) researcher is likely to produce better theory.

