Methods, Descriptive Statistics, and the Texas Sharpshooter
If there’s one thing that my PhD studies taught me as a 40-year-old student, it’s that methods matter. In fact, the methods section of a research article is the most important. The methods section describes what the researchers did, how they collected the information, and the statistical analysis used to support the inference(s). As Ben Goldacre states in his book Bad Science: “Always read the methods and results section of a trial to decide what the findings are, because the discussion and conclusion pages at the end are like the comment pages in a newspaper.” We as sports performance professionals rarely, if ever, perform experimental research in the academic sense. Typically, experimental research encompasses randomization, tight internal and external validity, a laboratory setup, and the use of inferential statistics.
Randomization: Random assignments of group and treatment to avoid sampling bias.
Internal Validity: Establishing internal validity involves assessing data collection procedures, reliability and validity of the data, experimental design, and the setting of the experiment.
External Validity: External validity relates to the ability to generalize the results to a larger population. To assess external validity, evaluate the similarities between the lab environment and the real world such as:
People
Places
Times
Order
Materials
Raters/Coaches
Instructions
Conditions
Methods
Inferential Statistics: This is hypothesis testing 101, where we use a representative sample to infer our results to a larger population. Sample → Population. (ANOVA, t-tests, F-tests, etc.)
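To make the idea concrete, here is a minimal sketch of one of the tests named above: an independent two-sample t-test (pooled-variance form) computed with only the Python standard library. The jump-height values and group labels are hypothetical, purely for illustration.

```python
from statistics import mean, variance
from math import sqrt

def t_statistic(a, b):
    """Pooled-variance t statistic for two independent samples."""
    na, nb = len(a), len(b)
    # Pooled variance weights each sample's variance by its degrees of freedom
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical countermovement-jump heights (cm) for two training groups
group_a = [38.1, 40.2, 39.5, 41.0, 37.8]
group_b = [36.5, 37.2, 35.9, 38.0, 36.8]

t = t_statistic(group_a, group_b)
print(f"t = {t:.2f}")
```

The t statistic is then compared against a t distribution (with n₁ + n₂ − 2 degrees of freedom) to decide whether the difference between group means is plausibly due to sampling noise — which is exactly the sample-to-population inference described above.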
In the lab environment, the same people, places, time of day, ordering, materials, raters, etc. are used to avoid bias and confounding. The reality is that this is extremely difficult for most coaches, who have small staffs, minimal time with their athletes, logistical issues, and budgets that may look like my hairline (I’m bald).
Real World vs. Academia
We as sport performance coaches are observational researchers. We observe the athletes in their most natural setting (weight room, ice, field, court). Certainly, our goal is to have high internal validity (tools with reliable and valid measures), but our research design is much different. We don’t have a control group. In sports science our sample is our team, and our team is our sample. We use descriptive statistics to weave our narratives while informing players and coaches. We are Texas Sharpshooters.
Descriptive Statistics: A means of describing a data set (in this case, the team). These include:
Central tendency (mean, median, mode)
Measures of dispersion (variance, standard deviation)
Measures of correlation (Pearson’s correlation coefficient)
The Texas sharpshooter: Sports science professionals are essentially Texas sharpshooters. As the story goes, a Texan shoots the side of a barn at random multiple times, looks at the trends and patterns of bullet holes, and then draws the bullseye. In structured experimental research, this is the equivalent of HARKing (hypothesizing after the results are known). It’s a big no-no! However, in the landscape of applied intervention in sport, it’s how we operate on a daily basis, except for outliers (which are very important). It’s not that we’re shooting at random; it’s the fact that we’re drawing the bullseye after the fact.
The Texas Sharpshooter
Traditional Experimental Research: Hypothesis → Data Collection → Results
Sports Science: Collect Data → Assess Trends → Hypothesize
Do Coaches need to Understand statistics?
Yes and no, and it depends. I believe a basic understanding of descriptive statistics is very important for coaches to acquire. It allows one to communicate more clearly and, perhaps more importantly, it enhances a coach’s ability to display data effectively (i.e., choosing the correct plot/graph/display). A picture is worth a thousand words. Regarding inferential statistics, if the goal is to read, understand, and digest research more efficiently, a basic understanding of how hypothesis testing works can work wonders. Published is not synonymous with proven. A base-level understanding of inferential statistics helps build a more robust skeptical thinking filter. Methods matter, and understanding their limitations is a powerful tool in the critical thinker’s toolbox.
“Science may be described as the art of systematic over-simplification.” -Karl Popper