This is an excerpt from Research Methods in Physical Activity, 8th Edition, by Jerry R. Thomas, Philip E. Martin, Jennifer L. Etnier, and Stephen J. Silverman.
After your proposal meeting, you collect the data to evaluate the hypotheses that you proposed. Of course, you follow the methods that you specified in your proposal carefully and consult with your major professor if problems arise or changes need to be made. After the data are collected, you complete the agreed-upon analysis and discuss the outcomes with your major professor (and possibly some committee members, particularly if a member has statistical expertise). Then you are ready to write the results and discussion to complete your research.
Results and Discussion
The final two sections of a thesis or dissertation are the results and discussion. Results are what you have found, and the discussion explains what the results mean. More often than not, sections are separate, although they are sometimes combined (particularly in multiple-experiment papers).
How to Write the Results Section
The results section is the most important part of the research report. The introduction and literature review indicate why you conducted the research, the methods section explains how you did it, and the results section presents your contribution to knowledge—in other words, your findings. The results should be concise and effectively organized and include appropriate tables and figures.
There are several ways to organize the results section. Often the best approach is to address each of the tested hypotheses; on other occasions, organizing the results around the independent or dependent variables of interest may be a better option. Occasionally, you may need to document that participants adhered to specified procedures. For example, in a field-based exercise intervention study, you need data that document adherence to the exercise prescription before addressing other outcomes. In addition, before discussing other findings, you may want to show previously replicated, standard, and expected effects. For example, in developmental studies of motor performance tasks, older children typically perform better than younger ones. You may want to report the replication of this effect before discussing the other results. When looking at the effects of training on several dependent variables, you may first want to establish that a standard dependent variable known to respond to training did, in fact, respond. For example, before examining whether cardiorespiratory training reduces cognitive stress, you need to show that the training actually produced a change in cardiorespiratory response.
Some items should always be reported in the results. The means and standard deviations for all dependent variables under the important conditions should be included. These are basic descriptive data that allow other researchers to evaluate your findings. Sometimes only the means and standard deviations of important findings are included in the results, but all the remaining means and standard deviations should be included in the appendix.
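As a minimal illustration of these descriptive data, the means and standard deviations for each condition can be computed with Python's standard library; the condition names and scores below are purely hypothetical:

```python
import statistics

# Hypothetical scores for one dependent variable under two conditions
scores = {
    "pretest":  [10, 12, 14, 16, 18],
    "posttest": [13, 15, 17, 19, 21],
}

# Mean and sample standard deviation for each condition
for condition, values in scores.items():
    m = statistics.mean(values)
    sd = statistics.stdev(values)
    print(f"{condition}: M = {m:.2f}, SD = {sd:.2f}")
```

In a thesis, values like these would appear in the text or in a summary table for the important conditions, with the full set relegated to the appendix.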
The results section should also feature tables and figures that display appropriate findings. Each table and figure should carry a meaningful title or caption that does not simply note what data are being presented but highlights important outcomes. Figures are particularly useful for percentage data, interactions, time-based variables, and summaries of related findings. When using figures to represent group effects, always present an estimate of variability with the mean data; either standard deviations or confidence intervals are appropriate. Only the important tables and figures should be included in the results; those remaining should be placed in the appendix, if they are included at all.
Statistical information should be summarized in the text where possible. Statistics from ANOVA and MANOVA should always be summarized in the text, and complete tables should be relegated to the appendix. Make sure, however, that you include the appropriate statistical information in the text. For example, when giving the F ratio, report the degrees of freedom, the probability, and an estimate of the effect size: F(1, 36) = 6.23, p < .02, ES = 0.65. Above all, the statistics reported should be meaningful. Gastel and Day (2016, p. 73) reported a classic case that read “33 1/3% of the mice used in this experiment were cured by the test drug; 33 1/3% of the test population were unaffected by the drug and remained in a moribund condition; the third mouse got away.”
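As a sketch only, the in-text reporting style above can be produced with a small formatting helper; the function name and values here are hypothetical:

```python
def format_f_report(df1, df2, f_value, p_value, effect_size):
    """Format an F ratio with degrees of freedom, probability, and effect size."""
    return (f"F({df1}, {df2}) = {f_value:.2f}, "
            f"p < {p_value:.2f}, ES = {effect_size:.2f}")

# Matches the example given in the text
print(format_f_report(1, 36, 6.23, 0.02, 0.65))
# F(1, 36) = 6.23, p < 0.02, ES = 0.65
```

A helper like this keeps in-text statistical reporting consistent across an entire results section.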
Sometimes a better way to present this information is in a table. If the perfect scientific paper is ever written, it will read, “Results are in table 1.” However, this suggestion does not mean that the results section should consist mostly of tables and figures. Having to thumb through eight tables and figures placed between two pages of text is disconcerting. But even worse is having to turn 50 pages to the appendix to find a necessary table or figure. Read what you have written. Are all the important facts there? Have you provided more information than the reader can absorb? Is some of the information peripheral to the questions or hypotheses guiding the study? If the answer to any of these questions is “yes,” you should revise the results section.
Do not be redundant and repetitive. A common error is to include a table or figure in the results and then repeat it in the text, or to repeat data from a table in a figure. Describing tables and figures in a general way or pointing out particularly important outcomes is appropriate, but do not repeat every finding. Also, be sure that you do not call tables figures, and vice versa. As Gastel and Day (2016, p. 74) reported, some writers are so concerned with reducing verbiage that they lose track of antecedents, particularly for the pronoun it:
“The left leg became numb at times and she walked it off. . . . On her second day, the knee was better, and on the third day it had completely disappeared.” The antecedent for both its was presumably “the numbness,” but I rather think that the wording in both instances was a result of dumbness.
Reporting Statistical Data
A consistent dilemma among researchers, statisticians, and journal editors concerns the appropriate reporting of statistical information in published research papers (e.g., Fritz & Morris, 2012; Nuzzo, 2014; Wasserstein & Lazar, 2016). In recent years, some progress has been made on two issues in particular: (a) reporting some estimate of the size and meaningfulness of a finding and (b) reporting its reliability or significance. Two organizations of importance to our field, the American Physiological Society and the American Psychological Association (2020), have now published guidelines regarding these issues. Following are summaries of general guidelines taken from these two sources:
- Information on how sample size was determined is always important. Indicate the information (e.g., effect sizes) used in the power analysis to estimate sample size. When the study is analyzed, confidence intervals are best used to describe the findings.
- Always report any complications that have occurred in the research, including missing data, attrition, and nonresponse, as well as how these problems were handled in data analysis. Before you compute any statistics, look at your data. Always screen your data (this is not tampering with data) to be sure the measurements make sense.
- Select minimally sufficient analyses. Complicated quantitative methods may fit the data well and lead to useful conclusions, but many designs are well served by basic, simpler techniques; when they are, those should be the statistics of choice. Your job is not to impress your reader with your statistical knowledge and expertise but to analyze the research appropriately and present it so that a reasonably well-informed person can understand it.
- Report actual p values; confidence intervals are even better. Always report an estimate of the magnitude of the effect. If the measurement units (e.g., maximal oxygen consumption) have real meaning, then reporting them in an unstandardized way such as mean difference is useful. Otherwise, standardized reporting such as effect size or r2 is useful. In addition, placing these findings in practical and theoretical context adds much to the report.
- Control multiple comparisons through techniques such as the Bonferroni correction.
- Always report variability using the standard deviation. The standard error characterizes the uncertainty associated with an estimate of a population parameter (such as the mean) and is most useful in constructing confidence intervals.
- Report your data at the level (e.g., how many decimal places) that is appropriate for scientific relevance.
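Several of the guidelines above, namely confidence intervals, effect size, and Bonferroni control of multiple comparisons, can be sketched with Python's standard library. This is a minimal sketch under stated assumptions: the data are hypothetical, and the 1.96 critical value relies on a normal approximation that suits larger samples (a t critical value would be more accurate for small ones):

```python
import math
import statistics

def cohens_d(a, b):
    """Standardized mean difference (Cohen's d) using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(pooled_var)

def ci95(data):
    """Approximate 95% confidence interval for the mean (normal approximation)."""
    m = statistics.mean(data)
    se = statistics.stdev(data) / math.sqrt(len(data))  # standard error of the mean
    return (m - 1.96 * se, m + 1.96 * se)

def bonferroni_alpha(alpha, n_comparisons):
    """Per-comparison alpha after Bonferroni adjustment."""
    return alpha / n_comparisons

# Hypothetical treatment and control scores
treatment = [10, 12, 14, 16, 18]
control = [8, 10, 12, 14, 16]

print(f"d = {cohens_d(treatment, control):.2f}")
lo, hi = ci95(treatment)
print(f"95% CI for treatment mean: [{lo:.2f}, {hi:.2f}]")
print(f"Adjusted alpha for 5 comparisons: {bonferroni_alpha(0.05, 5):.3f}")
```

Reporting the unstandardized mean difference alongside d, as the guidelines suggest, helps readers judge practical as well as statistical meaning.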
Reporting Qualitative Data
Most of our general suggestions for preparing the results section hold for reporting qualitative data. In a qualitative study, you will report the results from the data analysis as themes or conclusions. Each of these themes or subthemes must be supported with data—from interviews, observations, or material collection. Present data to make the case that the results come from multiple sources and are clearly supported. For example, if you said that triangulation among student interviews, teacher interviews, and observations was used during data analysis, all three types of data should be included. Writing a qualitative results section may require multiple revisions so that the results seem plausible and are clear to the committee.
What to Include in the Discussion Section
Although the results are the most important part of the research report, the discussion is the most difficult to write. There are no cute tricks or clear-cut ways to organize the discussion, but the following rules define what to include:
- Discuss your results—not what you wish they were, but what they are.
- Relate your results back to the introduction, previous literature, and hypotheses.
- Explain how your results fit within theory.
- Interpret your findings.
- Recommend or suggest applications of your findings.
- Discuss any significant limitations of the methods and outcomes.
- Summarize and state your conclusions with appropriate supporting evidence.
Your discussion should point out where the data support the hypotheses and where they fail to do so, as well as other important findings. But do not confuse statistical significance with meaningfulness in your discussion; in fact, be especially careful to point out where they may not coincide. The discussion should also point out factual relationships between variables and situations, thus leading to a presentation of the significance of the research. Of course, this is an essential place not to confuse cause and effect with correlation. For example, do not say that a characteristic had an effect or influence on a variable when you mean only that the two were related.
The discussion should end on a positive note, possibly a summarizing statement of the most important finding and its meaning. Never end your discussion with a variation of the old standby of graduate students: More research is needed. Who would have thought otherwise?
The discussion should also point out any methodological problems that occurred in the research. But using a methodological cop-out to explain the results is unacceptable. If you did not find predicted outcomes and you resort to methodological failure as an explanation, you did not do sufficient pilot work.
Graduate students sometimes want their results to sound wonderful and to solve all the problems of the world. Thus, in their discussions, they often make claims well beyond what their data indicate. Your major professor and committee are likely to know a lot about your topic and therefore are unlikely to be fooled by these claims. They can see the data and read the results. They know what you have found and the claims that can be made. A much better strategy is to make your points effectively in your discussion and not try to generalize these points into grandiose ideas that solve humanity’s major problems. Write so that your limited contribution to knowledge is highlighted. If you make broader claims, knowledgeable readers are likely to discount the importance of your legitimate findings. Your discussion should not sound like the Calvin and Hobbes cartoon (by Bill Watterson) in which Calvin said, “I used to hate writing assignments, but now I enjoy them. I realized that the purpose of writing is to inflate weak ideas, obscure poor reasoning, and inhibit clarity.”
Another point about writing your discussion is to write so that reasonably informed and intelligent people can understand what you have found. We strongly recommend having others proofread your work and identify any passages with which they had difficulty. Do not use a thesaurus to replace your normal vocabulary with multisyllabic words and complex sentences. Your writing should not look like the examples in the In Other Words sidebar. (By translating, you can probably recognize these sentences as some well-known sayings.)
Your discussion can generally be guided by the following questions taken from the Publication Manual of the American Psychological Association (American Psychological Association, 2020):
- What have I contributed here?
- How has my study helped to resolve the original problem?
- What conclusions and theoretical implications can I draw from my study?
Writing the Discussion
Problem statement: Why did the chicken cross the road?
Method: One chicken observed by several individuals.
Results: Said chicken crossed the road.
Discussion: Following are the explanations given for the chicken crossing the road.
Dr. Seuss—Did the chicken cross the road? Did he cross it with a toad? Yes, the chicken crossed the road. But why it crossed I’ve not been told.
Sigmund Freud—I dream of a world where chickens can cross the road without having their motives questioned.
Captain Kirk—To boldly go where no chicken has gone before.
Colonel Sanders—Did I miss one?
Graduate student—Is that regular or extra crispy?