Research is an integrated process consisting of many steps, including data analysis and results reporting. Without a clear and accurate results section, a report will not be effective, no matter how good the design was or how important the topic is. A poor or ineffective results section can cause readers either to ignore your report entirely or to draw incorrect conclusions from it. In this assignment, you need to provide a 1500-word (plus or minus 10%) critique of the results section of a journal article. You can use an article that you reviewed in Assignment 1 or pick a new one. Please attach the journal article to the assignment.
When critiquing a results section, you should consider the points below:
Excluded participants: Were any participants excluded from the analyses and, if so, why? Did the researchers justify any exclusions appropriately? For a good discussion of the reasons to exclude outliers, see Osborne and Overbay (2004).
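To make this concrete, here is a minimal Python sketch of one common (and contestable) exclusion rule, flagging scores more than 3 standard deviations from the mean; the data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical reaction times (ms): 99 typical responses plus one extreme value.
scores = np.append(rng.normal(500, 50, size=99), 1450.0)

# Standardize and flag scores beyond +-3 SD; this is one rule of thumb
# discussed by Osborne and Overbay (2004), not the only defensible one.
z = (scores - scores.mean()) / scores.std(ddof=1)
print("Flagged (|z| > 3):", scores[np.abs(z) > 3])
print("Retained n:", int((np.abs(z) <= 3).sum()))
```

Whether such a cut-off is defensible depends on the construct and the design, which is exactly the kind of judgement your critique should evaluate.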
Missing data: If participants leave questions or items blank, we end up with what we call missing data. There are various methods of dealing with missing data (Schafer & Graham, 2002). Did the researchers choose the most appropriate method?
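To see what is at stake, here is a minimal Python sketch (with hypothetical questionnaire data) contrasting two simple approaches, listwise deletion and mean imputation; note that Schafer and Graham (2002) favour more principled methods such as multiple imputation and maximum likelihood:

```python
import numpy as np
import pandas as pd

# Hypothetical questionnaire scores with blanks coded as NaN.
df = pd.DataFrame({
    "anxiety":   [3.0, np.nan, 4.0, 2.0, 5.0],
    "wellbeing": [4.0, 3.0, np.nan, 5.0, 2.0],
})

# Listwise deletion: drop any participant with a missing item.
listwise = df.dropna()

# Mean imputation: simple, but it understates variability in the data.
mean_imputed = df.fillna(df.mean())

print(f"n after listwise deletion: {len(listwise)}")  # 3 of 5 retained
print(mean_imputed)
```

How much each choice matters depends on how much data are missing and why they are missing, so check whether the researchers report this.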
Validity and reliability of dependent variables: Did the researchers provide convincing evidence for the validity of each of the dependent variables that they used (including psychometric scales)? In other words, did each dependent variable show significant and appropriately sized correlations with the variables that it was supposed to be related to (convergent validity) and, equally importantly, weak nonsignificant relationships with the variables that it was not supposed to be related to (discriminant validity)? Also, was there good evidence of the internal reliability of the dependent variables? For example, did each psychometric scale have a suitable factor structure and/or acceptable Cronbach's alpha coefficients (> .70)?
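Cronbach's alpha is straightforward to compute from the item data, so you can verify a reported value if the article provides enough detail; a minimal sketch, assuming responses arranged as a participants-by-items array (the data here are hypothetical):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a participants x items matrix.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-point Likert responses (6 participants, 4 items).
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")  # > .70 is conventionally acceptable
```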
Sufficient statistical power: If researchers find a significant effect, then, ipso facto, they must have had sufficient statistical power to detect it. Consequently, it would be inappropriate to criticise the researchers for having low statistical power due to a small sample size, even if their sample is smaller than those used in previous research. However, if the researchers report null findings, these can be interpreted either as indicating that no effect is present or as indicating that an effect is present but the researchers had insufficient statistical power to detect it (i.e., a Type II error; see Cohen, 1988, 1992). Hence, statistical power is a critical concern when interpreting null findings. When interpreting a null finding, consider whether the research contained enough participants to detect the effect. Look back at previous research that has found the effect to see how many participants were used; meta-analyses and other reviews are good sources for this information. Does the research use substantially fewer participants than previous successful research? If so, then the null findings may be due to a lack of statistical power. Faul, Erdfelder, Lang, and Buchner (2007) provide free downloadable power analysis software (G*Power) that you can use to investigate whether researchers had sufficient power. It is available at: http://www.psycho.uni-duesseldorf.de/abteilungen/aap/gpower3/download-and-register

In addition, Maxwell (2004) provides some useful calculations regarding recommended sample sizes. Assume that researchers want to conduct a statistical test with Cohen's (1992) recommended power of .80 to detect a medium-sized effect using an alpha value of .05 and with equal numbers of participants in each condition. If the researchers are using a 2 x 2 between-subjects ANOVA and a single dependent variable, then, in order to detect a single, prespecified effect (e.g., a main effect), they should use 30 participants in each of the four cells of the 2 x 2 design (i.e., 120 participants). In order to detect all three effects (i.e., both main effects and the interaction), they should use 48 participants in each cell (i.e., 192 participants). Obviously, cell sizes will need to be larger if (a) cell sizes are unequal, (b) the ANOVA is larger (e.g., a 2 x 3 ANOVA), or (c) there is more than one dependent variable.
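If you want to cross-check such figures in code rather than in G*Power, here is a rough sketch using the statsmodels Python library; the data and design are the ones described above, and the results approximate rather than reproduce Maxwell's exact numbers:

```python
from statsmodels.stats.power import TTestIndPower, FTestAnovaPower

# Per-group n for a two-group comparison of a medium effect (d = .50)
# with power = .80 and alpha = .05; this approximates the test of a
# single prespecified main effect in a 2 x 2 between-subjects design.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"n per marginal group: {n_per_group:.0f}")  # ~64, i.e. ~32 per cell

# Total n for a one-way ANOVA across the four cells with a medium
# effect (Cohen's f = .25), power = .80, alpha = .05.
n_total = FTestAnovaPower().solve_power(effect_size=0.25, alpha=0.05,
                                        power=0.8, k_groups=4)
print(f"total n (4 groups): {n_total:.0f}")  # ~180 in total
```

Small discrepancies from Maxwell's figures (e.g., 32 rather than 30 participants per cell) are expected, since the calculations make slightly different assumptions about the error term.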
Statistical assumptions: Did the researchers meet all of the assumptions associated with the particular statistical tests that they used (e.g., equal cell sizes, normally distributed data, homogeneity of variance)?
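As a quick illustration, normality and homogeneity of variance can be screened with standard tests; a minimal sketch using SciPy with hypothetical data (bear in mind that these tests are themselves sensitive to sample size):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(100, 15, size=40)  # hypothetical scores, condition A
group_b = rng.normal(105, 25, size=40)  # condition B, larger spread

# Shapiro-Wilk: the null hypothesis is that the sample is normally distributed.
print("Shapiro-Wilk p (A):", stats.shapiro(group_a).pvalue)
print("Shapiro-Wilk p (B):", stats.shapiro(group_b).pvalue)

# Levene's test: the null hypothesis is equal variances across groups.
print("Levene p:", stats.levene(group_a, group_b).pvalue)
```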
Correct use of inferential statistics: All statistical techniques have their limitations. Did the researchers take these limitations into account? For example, did they use exploratory factor analysis (Floyd & Widaman, 1995; Russell, 2002), path analysis (Stage, Nora, & Carter, 2004), or structural equation modelling and confirmatory factor analysis (MacCallum & Austin, 2000; Schreiber, Stage, King, Nora, & Barlow, 2006) correctly? The cited papers provide good general introductions to these techniques. Was their dichotomization of quantitative variables appropriate (MacCallum, Zhang, Preacher, & Rucker, 2002; Maxwell & Delaney, 1993)?
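MacCallum et al.'s (2002) concern about dichotomization is easy to demonstrate with a small simulation; this sketch (hypothetical data with a true correlation of about .50) shows how a median split attenuates the observed correlation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
# Construct y so that it correlates about .50 with x.
y = 0.5 * x + rng.normal(scale=np.sqrt(1 - 0.25), size=n)

r_full, _ = stats.pearsonr(x, y)

# Median split: recode the quantitative predictor as a 0/1 variable.
x_split = (x > np.median(x)).astype(int)
r_split, _ = stats.pearsonr(x_split, y)

print(f"r with continuous x:  {r_full:.2f}")   # around .50
print(f"r after median split: {r_split:.2f}")  # noticeably smaller, ~.40
```

For a median split of a normally distributed variable, roughly a fifth of the correlation is lost, which translates directly into lost statistical power.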
Correct interpretation of analyses: Did the researchers interpret the results correctly? Look back at the precise predictions that the researchers made and match them against the actual pattern of results. Researchers are like politicians: They will try to place a positive spin on their results, emphasize supportive evidence, and downplay unsupportive evidence. As a critical analyst, it's your job to see through the rhetoric and spin and analyze the cold hard facts!
Alternative analyses: Different statistical tests can be used to address different questions. However, different statistical tests can also be used to address the same question. Did the researchers use the correct (i.e., most powerful, most precise) statistical test to investigate their hypotheses? Were there any alternative, more appropriate statistical analyses that could have been used to test the researchers' hypotheses?