Research is a key contributor to professional development in many professions, particularly healthcare, allowing practitioners to inform, adjust and monitor particular aspects of practice. The ability to evaluate research evidence appropriately is essential to avoid the assumption that all published research is of equal merit and validity. In order to critically appraise the article ‘Clinical handover in the trauma setting: a qualitative study of paramedics and trauma team members’ (Evans, Murray, Patrick, Fitzgerald, Smith & Cameron, 2010), the ten-point CASP (Critical Appraisal Skills Programme, 2006) framework is used. Current literature will be utilised to guide the discussion and reflection in order to conclude on the overall strength of this article.
The study’s aims are concisely stated and numbered clearly. This is of importance in research, keeping the main focus well established and succinct, allowing readers to easily understand the objectives and reducing the chance of inter-researcher confusion (Collins, 2010; Gerrish & Lacey, 2010; Stommel & Wills, 2004). The authors discussed their reasons for focussing on clinical handovers, highlighting that communication difficulties can lead to serious, even fatal, mistakes. A previous study is used to confirm the severity of this problem. Khan (2008) illustrates the benefits of using previous studies in one’s background to reinforce the discussion of why the research is relevant. The researchers discuss the MIST (Mechanism-Injuries-Signs-Treatment) template used in the military, pointing out that little is known about its effectiveness in more general settings. Other ways of improving communication are discussed, such as radio pre-alerts and the use of electronic tablets, noting that their effectiveness is as yet unproven, supporting the value of this new research.
The researchers have not disclosed reasons for choosing qualitative methodology; however, it is appropriate to their aims. They intended to obtain and illuminate personal views and subjective experiences of using the MIST template during handover, in turn modifying the template in response to recurring findings. The stated aim of understanding issues affecting handover efficiency is best researched via the qualitative method in order to gain participants’ interpretations of the other professionals involved and to illuminate the interactions between these groups (Block, 2006). The gathering of opinions on data transmission methods and data display within the emergency department (ED) provides a more in-depth understanding of how errors occur and, furthermore, how this problem could be improved (Bowling & Ebrahim, 2005).
The research design used in this study is grounded theory. This specifically enables a new theory to arise from data in order to explain social phenomena and human behaviour (Chears, 2009; Williams, 2012). It could be argued that the researchers are not developing a new theory in relation to the improvement of the ‘minimum dataset’, as they merely adapt the MIST template (an already established theory), whereas typically grounded theory forms a theory from original data collected during a study, not by testing a theory from previous literature in the field (Giles, 2002). In contrast, Hernandez (2011) recommends using datasets from previous research in order to collate secondary and primary research, allowing the combination of ideas to form a theory. Furthermore, the findings seem to show that grounded theory has been used within ‘attributes of an effective and ineffective handover’: a set of ideas designed to explain what constitutes handover quality has been developed. For example, 11 of the 17 participants expressed that a poor handover includes extraneous information and interruptions; the researchers developed this into a theory to explain why poor handovers may occur. They have described their use of grounded theory but not clarified why they used this method or with what aims they wished to develop new theories.
In relation to the recruitment strategy, Daymon & Holloway (2011) illuminate the importance of disclosing the setting, timeframe and people involved in research in order to clarify the boundaries of the study. Here, details of the inclusion of participants are thorough and well recorded, including geographical locations, timeframes and demographics. ‘Table 1’ shows all participants had a mean of 5 years’ post-graduate experience, indicating that those with considerable experience were selected. The researchers used a purposive sampling method to find a representative group (participants with experience of transporting trauma patients to a trauma service) and used convenience sampling within this representative group to ensure all participants could contribute to the data (Monsen & Horn, 2008). Purposive sampling is highly targeted and forms a specific group, making this method subject to bias; however, if the researchers are aiming to investigate a phenomenon relating to specific groups of people (e.g. paramedics and trauma team members), then purposive sampling is appropriate to ensure the correct target group is selected (Newell & Burnard, 2011). In addition, the researchers have explained how they selected a representative trauma team sample: by involving individuals from different specialty groups involved in the management of trauma patients (e.g. burns, anaesthetics). There is no record of anyone who chose not to take part.
Data collected via semi-structured interviews was an appropriate way to address the research aims, but there are flaws. Researchers needed to record the subjective experiences of this group of professionals in order to determine the key factors causing communication errors. However, information such as the location of interviews is not stated. Japec (2008) points out how the social context of interviews can affect responses. We do not know whether interviews were carried out in a controlled environment (i.e. in the same room, at a similar time of day and without disturbances). Moreover, the researchers have not disclosed the reasons why the interview method was chosen. McNiff and Whitehead (2010) and Blaikie (2010) illustrate the importance of including this information, to allow the reader to see the relevance of the chosen data collection techniques. Furthermore, there is no description of how the data was recorded; a vital element in research to increase confirmability and replicability (Gerrish & Lacey, 2010). The form of data cannot be distinguished from the software used (NVivo 8.0), as it can upload text, videos and tape recordings (Edhlund, 2007).
The use of a minimum topic guide for interviews ensures that similar data is collected from participants and keeps the sequence of questions consistent (Holloway & Wheeler, 2010). However, the researchers were ambiguous when discussing their use of the topic guide in the study design. They state it was used when interviewing the trauma team specialty groups about the minimum dataset for handover, yet do not clearly state whether it was used for questions on effective and ineffective handovers, data transmission or data display (even though these prompts appear in the topic guide). There is also no reference to use of the topic guide when interviewing paramedics (although paramedic-specific questions are present on the guide, leading to the assumption that it was used).
The interview method enhances data as it captures body language and non-verbal interactions (Gerrish & Lacey, 2010). However, it can skew results: social desirability can lead to the participant answering a question so as to please the researcher or to sound like a ‘good practitioner’ (Rubin & Babbie, 2010). In addition, the ‘interviewer effect’ may occur (where interviewers subtly influence participants’ responses through the wording of questions or body language), especially as some researchers belonged to the professions being studied.
In this research, the relationship between researchers and participants has not been sufficiently reflected upon. There has been no consideration of how reflexivity and experimenter bias may have influenced the choice of questions (e.g. the topic guide), the sample selection and the location. With no comment on who developed the topic guide, we do not know if it was one researcher or a collaboration. This information is essential: if experimenter triangulation was utilised, this would decrease the chance of experimenter bias and reflexivity skewing the questions, increasing the credibility of the research (Merriam, 2009). The researchers have not examined their role within sample recruitment either. One researcher is employed by Ambulance Victoria, and another by the Alfred Hospital; presumably, personal backgrounds influenced the choice of these two institutions for sample collection, yet there is no personal reflexivity expressed to show that they have considered their potential bias in this area. Reflexivity is critical in order to increase the rigor of the research; recognising how personal experiences, disposition and emotions can influence research choices allows researchers to compensate for this where possible, and to understand the importance of documenting subjective issues (Kirby, Greaves & Reid, 2006).
In consideration of ethical issues, the researchers have not discussed how, or even if, they briefed and debriefed the participants. Adequate briefing is essential so that participants know exactly what to expect and are aware of their rights (Fowler, O’Neill & Helvert, 2011). Briefing also allows participants to give informed consent and avoids passive deception, whereby the researchers deceive participants by omission (Cottrell & McKenzie, 2011). Debriefing is equally important, allowing participants to raise any issues experienced during the research (Jackson, 2011; Morrow, Boaz, Brearley & Ross, 2012). An ethics committee has approved this research, meaning the emotional impacts on researchers and participants have been assessed and the safeguards and well-being of the participants have been evaluated (Holloway & Wheeler, 2010). This implies that ethical considerations have been adequately taken into account, although more detail should have been provided.
With respect to the data analysis, grounded theory is defined and clearly explained. In the abstract, thematic analysis was said to be used, but there is no reference to it in the data analysis section. There is, however, a clear demonstration of its stages in the description of how the three nodes were developed by collecting recurring responses and developing them into codes and themes. Open coding has been used, which primarily allows codes, and subsequently themes, to emerge from the text alone; by using axial coding in addition, the researchers’ concepts and categories are implemented whilst re-reading the text in order to check that categories truthfully represent responses and to examine how concepts are related, increasing credibility and validity (Babbie, 2012).
However, the researchers have not explained how they collaborated to determine what data to present, nor do they disclose any outliers or contradictory results. Reflexivity can affect this process, as their subjective thoughts may influence their choices. The researchers have not considered this issue, resulting in reduced credibility (Brink, 2006). They refer to a ‘general consensus’ when talking about the usability of MIST, which is rather vague and ignores differing responses. Nevertheless, sufficient data is presented to support the findings. Quotes are implemented to support the data, and MIST is rewritten and displayed, including responses. The attribute box allows readers to distinguish differing views between professionals, adding credibility to the data transmission results. Finally, by assigning a third researcher, experimenter bias is reduced, and the issue of reflexivity is to some degree mitigated, as researchers’ past experiences will all differ, affecting how they may perceive participants’ answers and subsequently code the text. Triangulation of researchers adds rigor to the research (Inoue, 2012).
There is a relatively clear statement of findings in the research; data is presented under primary nodes developed by thematic analysis clearly and concisely, and the findings are also logically discussed in the order of the aims. Hinshaw (2011) emphasises the importance of clearly presented results to allow the reader easy access to the main outcomes and suggested action points. Specific quotes arising from the interviews are used to corroborate and increase the dependability of the results (Streubert & Carpenter, 2011). Arguments are presented for and against the researchers’ suggestions. Supporting arguments include the concept of ‘time out’ in theatres, where team members pause and complete a checklist to ensure safety (this current practice boasts a reduction in surgical error), reinforcing the results in which paramedics state an effective handover is one where the receiving team stops and exercises listening skills. An example of the researchers challenging their findings is the reference to a study which found information recall of just 36% even when paramedics were provided with handover training. Using evidence to challenge their research demonstrates that the researchers are not prejudiced in favour of their own results and that they recognise the need to consider additional barriers (Brink, 2006).
However, the researchers have not discussed the credibility of their findings; they have employed researcher triangulation to reduce intrinsic biases, but have not mentioned how this improves credibility and rigor in the discussion. There is no comment about respondent validation, raising the question of whether this was carried out. It is a method of checking the truthfulness of research by giving participants the findings to comment on in case of any misinterpretation: an effective way of reducing researcher bias (Pope & Mays, 2006).
The value of this research is limited: the researchers acknowledge their results form a basis for development and recognise that trials and further research must be carried out. Findings are discussed in light of current practice, recognising that the MIST tool needs to be trialled further. The researchers also acknowledge that training will have to be developed for paramedics (as with any new proposed method in ambulance services). It is stated that noise barriers need to be evaluated; however, no new areas of research are proposed. The researchers have not discussed whether their research can be transferred to other populations. They have briefly discussed generalizability, stating the research should be generalised to other hospitals with caution, as the data was collected in a busy referral hospital. They also recognise that selecting paramedics with experience in trauma settings can lead to decreased generalizability, as paramedics with less trauma experience may have differing views.
In conclusion, this research boasts excellent presentation and structure, comprising a strong background and aims. Utilising qualitative methodology enabled the researchers to gain subjective experiences and views from healthcare professionals to provide a deeper understanding of how communication errors occur during handover and to propose a multifactorial strategy for improvement. It could be argued that the researchers are not developing a new theory and are therefore not utilising grounded theory correctly; however, there is ample literature disputing this, stating that it is acceptable to build on existing theory in order to develop a new one. The research has been carried out in the light of the original aims throughout, and the results are clearly presented, with additional quotes to reinforce points. However, there is minimal consideration of potential biases and the effect of reflexivity, reducing credibility. Furthermore, many methods have not been justified (research method, use of grounded theory, data collection and data presentation), leading to overall low rigor and credibility. Finally, it is recognised that this research will not change future practice without further research and trialling. Nevertheless, the original data collected here and the strategies for improvement presented make this research a valuable contribution to the field.
Babbie, E. (2012). The practice of social research. (13th ed.). Wadsworth: Cengage Learning.
Blaikie, N. (2010). Designing social research. (2nd ed.). Cambridge: Polity Press.
Block, D. (2006). Healthcare outcomes management: strategies for planning and evaluation. London: Jones and Bartlett Publishers.
Bowling, A. & Ebrahim, S. (2005). Handbook of health research methods: investigation, measurement and analysis. Berkshire: Open University Press.
Brink, H. (2006). Fundamentals of research methodology for health care professionals. (2nd ed.). Cape Town: Juta & Co.
Chears, V. (2009). Taking a stand for others: a grounded theory. USA: ProQuest LLC.
Collins, H. (2010). Creative research: the theory and practice of research for the creative industries. London: AVA Publishing.
Cottrell, R. & McKenzie, J. (2011). Health promotion & education research methods: using the five-chapter thesis/dissertation model. (2nd ed.). London: Jones and Bartlett Publishers.
Critical Appraisal Skills Programme (CASP). (2006). Qualitative research: appraisal tool. 10 questions to help you make sense of qualitative research. Oxford: Public Health Resource Unit.
Daymon, C. & Holloway, I. (2011). Qualitative research methods in public relations and marketing communications. (2nd ed.). Oxfordshire: Routledge.
Edhlund, B. (2007). NVivo essentials: the ultimate help when you work with qualitative analysis. Stallarholmen: Form & Kunskap.
Evans, S., Murray, A., Patrick, I., Fitzgerald, M., Smith, S. & Cameron, P. (2010). Clinical handover in the trauma setting: a qualitative study of paramedics and trauma team members. Quality and Safety in Health Care, 19(6), 1-6.
Fowler, C., O’Neill, L. & Helvert, J. (2011). The handbook of emergent technologies in social research. New York: Oxford University Press.
Gerrish, K. & Lacey, A. (2010). The research process in nursing. (6th ed.). Sussex: Blackwell Publishing.
Giles, D. (2002). Advanced research methods in psychology. Sussex: Routledge.
Hernandez, C. (2011). Grounded theory: the philosophy, method, and work of Barney Glaser. USA: BrownWalker Press.
Hinshaw, A. (2011). Shaping health policy through nursing research. New York: Springer Publishing.
Inoue, A. (2012). Writing studies research in practice: methods and methodologies. USA: Southern Illinois University Press.
Jackson, S. (2011). Research methods: a modular approach. (2nd ed.). Wadsworth: Cengage Learning.
Japec, L. (2008). Advances in telephone survey methodology. New Jersey: John Wiley & Sons.
Khan, J. (2008). Research methodology. New Delhi: APH Publishing.
Kirby, S., Greaves, L. & Reid, C. (2006). Experience research social change: methods beyond the mainstream. (2nd ed.). Toronto: University of Toronto Press.
Merriam, S. (2009). Qualitative research: a guide to design and implementation. San Francisco: Jossey-Bass.
McNiff, J. & Whitehead, J. (2010). You and your action research project. (3rd ed.). Oxfordshire: Routledge.
Monsen, E. & Horn, L. (2008). Research: successful approaches. (3rd ed.). Chicago: American Dietetic Association.
Morrow, E., Boaz, A., Brearley, S. & Ross, F. (2012). Handbook of service user involvement in nursing & healthcare research. Sussex: John Wiley & Sons.
Newell, R. & Burnard, P. (2011). Research for evidence based practice in healthcare. (2nd ed.). Sussex: John Wiley & Sons.
Pope, C. & Mays, N. (2006). Qualitative research in healthcare. (3rd ed.). Oxford: Blackwell Publishing.
Rubin, A. & Babbie, E. (2010). Essential research methods for social work. (2nd ed.). Belmont: Cengage Learning.
Stommel, M. & Wills, C. (2004). Clinical research: concepts and principles for advanced practice nurses. Philadelphia: Lippincott, Williams & Wilkins.
Streubert, H. & Carpenter, D. (2011). Qualitative research in nursing. (5th ed.). Philadelphia: Lippincott, Williams & Wilkins.
Williams, J. (2012). The paramedics guide to research: an introduction. Berkshire: Open University Press.