Study Replication: Relationship between Sleep Duration and Anxiety/Depression


The Replication Crisis

 

Abstract

 

Replication refers to the repetition of research to see whether findings generalise across situations and time (Diener, 2019). The replication crisis is the failure of many published findings to reproduce consistently, and numerous replication studies document these discordant results. This report aims to replicate the study by Becker (2018), which explores the relationship between sleep duration and anxiety and between sleep duration and depression. The sample consisted of undergraduate psychology students (N = 103) who completed standardised self-report questionnaires assessing anxiety and depression together with the Pittsburgh Sleep Quality Index (PSQI). The results showed a significant negative correlation between sleep duration and anxiety, but no significant correlation between sleep duration and depression. The results are consistent with the replication crisis, as they do not concord with Becker's conclusions. They also raise wider issues that may affect replicability, including cultural and gender biases.

 

The Replication Crisis

Previous psychological research distinguishes two forms of replication: conceptual and direct (Pashler & Harris, 2012). A direct replication aims to reproduce the original method of a study as closely as possible, although it can include small alterations. Functionally, the replication remains the same if it uses the same scales, materials, statistical tests and so on. It is designed to generalise across the same variations as the original study, or to vary aspects in ways consistent with the theory guiding the study (Simons, 2014). Direct replication tests whether the same task yields the same results. A conceptual replication attempts to test the same hypothesis in a new way, to find whether results generalise to different samples, times, or situations (Shrout & Rodgers, 2018).

Psychological researchers rarely conduct direct replication attempts, as it is very difficult to reproduce studies precisely. However, researchers frequently attempt conceptual replications, which are considered more informative for assessing the importance of findings because they test validity and generalisability (Carpenter, 2012). A successful conceptual replication is also more publishable than a successful direct replication. Yet this publishability exposes conceptual replications to publication bias, whereby eye-catching positive results are favoured and useful non-significant findings are discarded into the file drawer.

Psychological knowledge develops through empirical observation and the testing of statistical hypotheses against data. One would expect statistically significant findings to be easily replicated with new data and in new settings, but in practice many findings have replicated less often than expected, leading to claims of a replication crisis. Studies are kept relevant within the psychological community through small positive claims accumulating over time: these maintain the same concept as the original studies but often have completely different designs (Schmidt & Oh, 2016). This is what can be understood as conceptual replication.

The replication crisis itself can be viewed from three standpoints. The first involves scientific fraud and manipulated data in research publications; the evidence includes the case of Stapel (Bhattacharjee, 2013), as well as Sanna and Hauser, who were prominent in the social and cognitive psychology spheres (Wade, 2010). The second was the publication of reports in which Simmons et al. (2011) criticised questionable research practices that produce an inflated rate of Type 1 errors in psychological studies (Diener, 2019). The third was the Open Science Collaboration's systematic sample of three highly regarded journals in psychology (Nosek et al., 2015). The collaboration aimed to replicate 100 results and found that only 36% of the replication findings were significant, and once the original findings were combined with the new data, 32% of the original effects were no longer significant. Furthermore, effect sizes were roughly halved in the replications, and replication failures were related to features of the original studies (e.g., studies with surprising findings were more likely to fail to replicate than studies with intuitive findings).

In this exploration of Becker's (2018) study, the aim of this report is to replicate the significant positive relationships between sleep duration and anxiety, and between sleep duration and depression. According to Becker (2018), anxiety was a better indicator of disturbances in sleep duration than depression, and he suggests that among American students there is a high prevalence of poor sleep, with distinct mental health symptoms accompanying poorer sleep.

Two alternative hypotheses were introduced in order to conduct the replication.

H1 = There is a relationship between sleep duration and anxiety.

H2 = There is a relationship between sleep duration and depression.

Based on Becker's (2018) findings, we would therefore expect to see a positive correlation for both: r = .08 for anxiety and r = .14 for depression.

 

Method

Participants

The sample consisted of 103 undergraduate students at the University of Strathclyde. Two participants did not consent to the study, so for reasons of privacy and confidentiality their results were not included; 101 participants did consent. There were 84 (83%) female participants, 16 (16%) male, and 1 (1%) who preferred not to say. The a priori sample size calculated with G*Power was 167 for a one-tailed test, with the effect size set to 0.19 (the small effect used in Becker's 2018 study), an alpha error probability of 0.05, and power of 0.80.
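
As a rough cross-check of the G*Power figure, the required sample size for a one-tailed test of a correlation can be approximated with the Fisher z transformation. The sketch below is illustrative only; it uses the values reported above (r = .19, alpha = .05, power = .80) and a standard approximation rather than G*Power's exact routine.

```python
# Approximate a-priori sample size for a one-tailed correlation test
# (Fisher z approximation; an illustrative cross-check of the G*Power value above).
import numpy as np
from scipy.stats import norm

r, alpha, power = 0.19, 0.05, 0.80     # values from the Participants section

z_alpha = norm.ppf(1 - alpha)          # one-tailed critical value, ~1.645
z_beta = norm.ppf(power)               # ~0.842
effect = np.arctanh(r)                 # Fisher z transform of the expected correlation

n = ((z_alpha + z_beta) / effect) ** 2 + 3
print(round(n))                        # ~170, close to the exact G*Power result of 167
```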

Design

The design of this study was correlational; no variables were manipulated, so there were no independent or dependent variables. One variable measured the duration of sleep (Question 4 of the PSQI) and further variables measured scores for anxiety and depression. Data were collected using self-report Likert-scale questionnaires and measured on an interval scale.

Materials

The standardised Pittsburgh Sleep Quality Index (PSQI) designed by Buysse et al. (1989) was used to assess sleep. Questionnaire items were measured on a four-point Likert scale ranging from 'not in the past month' to 'three or more times a week'. Example statements included 'Cannot get to sleep within 30 minutes' and 'Wake up in the middle of the night or early morning'. Sleep duration was measured as a numerical response to the question 'During the past month, how many hours of actual sleep did you get at night? (This may be different than the number of hours you spend in bed.)'. Only Question 4 addressed the variable of interest, sleep duration, which was recorded as an interval-level numerical value, e.g. '8 hours'.

Anxiety and depression were measured by the short-form Depression Anxiety and Stress Scales (DASS-21; Henry & Crawford, 2005). The questionnaire consisted of fourteen items, seven measuring anxiety and seven measuring depression, e.g. 'I couldn't seem to experience any positive feeling at all'. Items were measured on a four-point scale ranging from 'Did not apply to me at all' to 'Applied to me very much or most of the time'. Both subscales showed good internal reliability (Cronbach's alpha: anxiety = .81, depression = .91). The scores from all measures reflected the participants' sleep duration and levels of anxiety and depression at the time of measurement.
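
For illustration, internal-reliability figures like those above can be computed along the following lines. This is a minimal sketch assuming the item responses are held in a participants-by-items DataFrame; the column names are hypothetical and not taken from the original data file.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a participants-by-items DataFrame (listwise deletion)."""
    items = items.dropna()
    k = items.shape[1]                              # number of items in the subscale
    item_vars = items.var(ddof=1).sum()             # sum of the individual item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the participants' total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical column names for the seven anxiety items:
# alpha_anxiety = cronbach_alpha(df[["anx1", "anx2", "anx3", "anx4", "anx5", "anx6", "anx7"]])
```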

Procedure

Participants took part in the research by completing the questionnaires individually online. Responses were anonymous, though gender and age were recorded. The questionnaire relating to sleep duration was completed first, after which participants immediately completed the questionnaire relating to anxiety and depression. Ethical issues were addressed by emphasising the voluntary and anonymous nature of the study. A full online debrief was given to participants who completed the study, along with information on support services and points of contact for counselling assistance.

Results

The results are based on data provided by 103 participants; however, two did not consent and one answered only the gender question, so their data were excluded. There was a significant negative correlation between sleep duration and anxiety, and a non-significant correlation between sleep duration and depression. When the data were tested for normality and anomalous values, two outliers were found for Question 4. These were addressed by creating z-scores for 'mean_anxiety', 'mean_depression' and 'mean_sleepduration'; any cases with z-scores less than -3.29 or greater than +3.29 were removed. The means and standard deviations are presented in Table 1.

Table 1. Means and standard deviations for sleep duration, anxiety and depression.

Variable       Mean    Std. Deviation
Sleep          7.65    1.08
Anxiety        1.64    0.56
Depression     1.82    0.68
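
The outlier screening described above can be reproduced along the following lines. This is a sketch only: the column names follow those given in the Results section, but the data file name and its exact structure are assumptions.

```python
import pandas as pd

# Hypothetical file name; columns named as in the Results section above.
df = pd.read_csv("sleep_study.csv")
cols = ["mean_anxiety", "mean_depression", "mean_sleepduration"]

z = (df[cols] - df[cols].mean()) / df[cols].std()   # z-score each measure
keep = z.abs().le(3.29).all(axis=1)                 # the +/- 3.29 criterion described above
screened = df[keep]
print(f"{int((~keep).sum())} outlying case(s) removed")
```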

Figure 1. Scatterplot showing the significant negative correlation between sleep duration and anxiety (with line of best fit).

Figure 2. Scatterplot showing the non-significant correlation between sleep duration and depression.

The higher mean score for depression suggests a higher prevalence of depressive than anxiety symptoms in the sample. A negative relationship between sleep duration and anxiety is visible in the downward-sloping linear pattern in Figure 1. From the scatterplot in Figure 2 there appears to be no correlation between depression and sleep duration, so the data appear to support only the hypothesised relationship between mean anxiety and sleep duration.

Spearman's correlation coefficient was used for the analyses because the data were skewed and did not meet parametric assumptions. Spearman's correlation revealed a significant negative relationship between mean anxiety and sleep duration, r = -.17, n = 94, p < .05 (one-tailed), r² = .03, which supports Becker's (2018) conclusion that the two variables are related, although the direction observed here is negative. Spearman's correlation revealed a non-significant correlation between mean depression and sleep duration, r = -.13, n = 92, p > .05, r² = .02. This does not replicate Becker's (2018) finding of a significant positive relationship.
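
A minimal sketch of this analysis is given below, assuming the screened DataFrame from the earlier sketch and a recent version of SciPy (the `alternative` argument for one-tailed tests requires SciPy 1.7 or later); the test direction is set for a negative association, matching the result reported above.

```python
from scipy.stats import spearmanr

# One-tailed Spearman correlations on the screened data (columns as in the Results section).
rho_anx, p_anx = spearmanr(screened["mean_sleepduration"], screened["mean_anxiety"],
                           nan_policy="omit", alternative="less")
rho_dep, p_dep = spearmanr(screened["mean_sleepduration"], screened["mean_depression"],
                           nan_policy="omit", alternative="less")

print(f"Sleep duration and anxiety:    rho = {rho_anx:.2f}, one-tailed p = {p_anx:.3f}")
print(f"Sleep duration and depression: rho = {rho_dep:.2f}, one-tailed p = {p_dep:.3f}")
```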

Discussion

Key: S-D=Sleep duration, A=Anxiety, D=Depression.

This report found results that clearly refute Becker's (2018) claims. It addressed the set variables of S-D, A and D in a small participant sample at one university, whereas Becker (2018) used a multi-university sample of 7,626 participants. The objective of this report was to see whether the significant positive relationships between S-D and A, and between S-D and D, could be replicated. Becker's (2018) objectives were to (1) describe sleep problems, (2) evaluate sex differences and (3) examine the specific mental health symptoms associated with sleep. This report therefore cannot be described as a direct replication, as it does not aim to reproduce the same overall set of findings as Becker.

Furthermore, age and gender are other factors that deviate from Becker's study. In this report ages ranged from 18 to 60, whereas in Becker's study they ranged from 18 to 29. This raises the issue of individual differences, another factor that can skew results. Becker's sample was 70% female and 30% male, and of those, 81% were white and 91% were non-Hispanic. In this replication the sample consisted of 83% female and 16% male participants, and no question on ethnicity was included. These factors highlight the lack of demographic detail in this report, which makes it difficult to compare the two studies clearly.

Another issue is the use of a volunteer sample, where participants choose to become part of a study (Coolican, 2014). This means the researcher cannot select an accurately targeted sample, making it difficult to replicate the findings. As volunteers are eager to assist, they may respond to demand characteristics, changing their natural behaviour in line with their perception of the study's aim. This can make the results unreliable and of limited relevance.

Another factor to acknowledge for reliability, validity and generalisability is cross-cultural variation. Becker's original study was conducted in the United States, whereas this study was conducted in the United Kingdom. Both samples are from Western cultures, but the two countries classify anxiety and depressive symptoms differently: the USA uses the Diagnostic and Statistical Manual of Mental Disorders (DSM), while the UK primarily uses the International Classification of Diseases (ICD). The anxiety and depression scales used in the questionnaire were based on information from the British Journal of Clinical Psychology and may therefore not extrapolate to the USA.

As suggested by Diener (2019), there are solutions to this 'crisis' of replication. Outlets such as the Open Science Framework and the APS journal Perspectives on Psychological Science report research replications (Diener, 2019). The emergence of these outlets is encouraging and increases the likelihood that replication attempts will be published. It points to a direction for psychological research that will enable researchers to increase the reliability and validity of both significant and non-significant findings in their field. For effective research and, consequently, effective replication, researchers should extend awareness to their data, sources, methodology, peer review, exclusive resources and open access.

It is also important to be aware that many direct replication studies have been successful; replications of Milgram's obedience study, for example, have produced consistent findings. Awareness of ethical issues that may also affect results is useful: for example, the learned-helplessness electric shock studies would not be suitable to replicate today. This may also point to why conceptual rather than direct replications occur and why researchers are puzzled by varied findings.

By adopting open science conventions, fully disclosing replication efforts and basing conclusions on multiple studies rather than on a single replication, more reliable replications may occur (Colling & Szűcs, 2018). Researchers need to develop their understanding of the influences on effect sizes, power analyses, and the disclosure of all statistical findings. This may enable other researchers to come closer to replicating findings and ease psychological society's stern methodological brow.

References

  • Becker, S. P., Jarrett, M. A., Luebbe, A. M., Garner, A. A., Burns, G. L., & Kofler, M. J. (2018). Sleep in a large, multi-university sample of college students: Sleep problem prevalence, sex differences, and mental health correlates. Sleep Health, 4(2), 174-181.
  • Bhattacharjee, Y. (2013). The mind of a con man. The New York Times Magazine, Apr. 2.
  • Buysse, D. J., Reynolds, C. F., Monk, T. H., Berman, S. R., & Kupfer, D. J. (1989). The Pittsburgh Sleep Quality Index: A new instrument for psychiatric practice and research. Psychiatry Research, 28(2), 193-213.
  • Carpenter, S. (2012). Psychology's bold initiative: In an unusual attempt at scientific self-examination, psychology researchers are scrutinizing their field's reproducibility. Science, 335, 1558-1560.
  • Colling, L., & Szűcs, D. (2018). Statistical inference and the replication crisis. Review of Philosophy and Psychology.
  • Coolican, H. (2014). Research Methods and Statistics in Psychology. London: Psychology Press.
  • Diener, E., & Biswas-Diener, R. (2019). The replication crisis in psychology. In R. Biswas-Diener & E. Diener (Eds.), Noba textbook series: Psychology. Champaign.
  • Henry, J. D., & Crawford, J. R. (2005). The short-form version of the Depression Anxiety Stress Scales (DASS-21): Construct validity and normative data in a large non-clinical sample. British Journal of Clinical Psychology, 44(2), 227-239.
  • Pashler, H., & Harris, C. R. (2012). Is the replicability crisis overblown? Three arguments examined. Perspectives on Psychological Science, 7(6), 531-536.
  • Rousseau, D. L., & Porto, S. P. S. (1970). Polywater: Polymer or artefact? Science, 167(3926), 1715-1719.
  • Schmidt, F. L., & Oh, I.-S. (2016). The crisis of confidence in research findings in psychology: Is lack of replication the real problem? Or is it something else? Archives of Scientific Psychology, 4(1), 32-37.
  • Shrout, P. E., & Rodgers, J. L. (2018). Psychology, science, and knowledge construction: Broadening perspectives from the replication crisis. Annual Review of Psychology, 69, 487-510.
  • Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359-1366.
  • Simons, D. J. (2014). The value of direct replication. Perspectives on Psychological Science, 9(1), 76-80.
  • Taubes, G. (1993). Bad Science: The Short Life and Weird Times of Cold Fusion. New York, NY: Random House.
  • Wade, N. (2010). Inquiry on Harvard lab threatens ripple effect. The New York Times.
