Science and the Application to Sport
If only it were as easy as writing it on a napkin. Well, as I’ve aged, I’ve challenged myself to do just that. What exactly is science? That’s a question I feel every sport “scientist” should be able to answer. My napkin and its contents reside in the mind of philosopher and scientist Karl Popper.
If it’s that easy, what are the challenges of applying quality science to sport? Here are my thoughts.
1. Start with a Problem
That’s right. As a coach, start with a genuine problem. Perhaps your problem is defined by the sport coach or the athletes you serve, but define a problem. The origin of the problem matters! Tech companies and academia are not the best sources for fishing out initial problems (IMO); it’s your team, your athletes, and your coaches.
Regarding the problem, ask yourself two things:
How does it help the player?
Is it interesting or important?
2. Temporary Theory
Use specific language when creating your theory. Definitions matter. What kind of strength? What kind of power? Clearly state your assumptions. Science and research are filled with assumptions. Statistics and probability are filled with assumptions. Models are built on assumptions. In fact, models are oversimplifications of reality. They are built from the assumptions of “experts” and are used to make predictions in the real world. Clearly state your assumptions.
Don’t be fooled by fancy jargon. Understand first principles (physics, programming, psychology, physiology, and biomechanics). I rely on these as my BS meter. I learn every day, and I was recently reminded of my ignorance on the HPH Podcast with my friend Dr. Mal McHugh: The Case Against Adductor Squeeze Testing. Your first principles are your Swiss Army knife of knowledge. Rely on them.
3. Error Elimination
Perhaps the most important! Are we collecting noise? “I’m a numbers guy.” “You can’t manage what you don’t measure.” “We will measure and manage metric ‘X’ and make the appropriate changes.” “We have a new force plate, Catapult system, Moxy Monitor, and HRV system, and we can’t wait to measure and make changes to quality ‘X.’”
How many times have we heard this as 21st-century sports scientists? Perhaps the best place to start (aside from having an initial problem to solve) is to perform your own experiment. First order of analysis: is the metric reliable?
Reliability: the degree to which the measure is free from measurement error
Y (observed score) = h (true score) + e (error)
“We think the error is contained within the device. It is not. The error is the process that you collected the data from.” -Scot Morrison
Here’s a simple way to test it (a scripted sketch of the same calculation follows the steps below).
Choose your test and focus on a metric (or a small set of metrics)
Create a standardized warm-up/procedure for your team (error comes from a number of sources: the rater, the individual player, ordering effects, the test itself, time of day, etc.)
Test
Wait a week
Perform the exact same standardized warm-up/procedure
Test
Download the Hopkins spreadsheet, and click the “Reliability” tab
Plug and play your data points (two separate dates)
View cell “L91” (this cell holds the ICC)
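If you prefer to script the check rather than read it off the spreadsheet, here is a minimal sketch in Python. The eight-athlete roster and jump-height numbers are hypothetical, purely for illustration; the point is the ICC(3,1) arithmetic (two-way mixed, single measure, consistency), the same ICC form referenced below.

```python
# Minimal sketch of the test-retest check described above.
# Assumptions: hypothetical jump heights (cm) for 8 athletes, measured in
# two sessions one week apart; ICC(3,1) = two-way mixed, single measure,
# consistency form.
import numpy as np

# rows = athletes, columns = sessions (week 1, week 2)
scores = np.array([
    [38.1, 37.5],
    [42.3, 43.0],
    [35.6, 36.2],
    [40.0, 39.4],
    [44.8, 45.5],
    [37.2, 36.8],
    [41.5, 42.1],
    [39.3, 38.7],
])

n, k = scores.shape
grand_mean = scores.mean()
athlete_means = scores.mean(axis=1)
session_means = scores.mean(axis=0)

# Two-way ANOVA sums of squares
ss_total = ((scores - grand_mean) ** 2).sum()
ss_athletes = k * ((athlete_means - grand_mean) ** 2).sum()
ss_sessions = n * ((session_means - grand_mean) ** 2).sum()
ss_error = ss_total - ss_athletes - ss_sessions

ms_athletes = ss_athletes / (n - 1)
ms_error = ss_error / ((n - 1) * (k - 1))

icc_31 = (ms_athletes - ms_error) / (ms_athletes + (k - 1) * ms_error)
print(f"ICC(3,1) = {icc_31:.3f}")
```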
Side note: I am not a statistician. There are plenty of good resources within the performance community (Patrick Ward and Scot Morrison are two of them). A variety of statistics can be used to calculate reliability, as the best measures to assess this quality are debatable [1, 2]. Relative reliability, reflecting how well the individual maintained his rank within the sample over time, can be evaluated using the intraclass correlation coefficient, ICC (3,1) [1]. Values less than 0.5, between 0.5 and 0.75, between 0.75 and 0.9, and greater than 0.9 may be interpreted as poor, moderate, good, and excellent reliability, respectively [3].
Absolute reliability, indicative of how repeated measures vary for individuals over time [1], can be assessed using the standard error of measurement (SEM). The SEM is calculated by multiplying the SD of scores from all subjects by the square root of 1.0 minus the ICC [4].
Here is the formula to calculate it via the spreadsheet: =D51*SQRT(1-L91)
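If you want the same numbers without the spreadsheet, here is a small self-contained sketch that applies the two ideas just described. The ICC and SD values are placeholders standing in for cells L91 and D51; substitute your own.

```python
import numpy as np

# Placeholder values standing in for spreadsheet cells L91 (the ICC) and
# D51 (the SD of scores from all subjects); swap in your own numbers.
icc = 0.94
sd_all_scores = 2.9   # cm

# Relative reliability, interpreted against the thresholds cited above [3]
if icc > 0.9:
    rating = "excellent"
elif icc > 0.75:
    rating = "good"
elif icc > 0.5:
    rating = "moderate"
else:
    rating = "poor"

# Absolute reliability: SEM = SD * sqrt(1 - ICC), i.e. =D51*SQRT(1-L91) [4]
sem = sd_all_scores * np.sqrt(1 - icc)

print(f"ICC = {icc:.2f} ({rating}); SEM = {sem:.2f} cm")
```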
Finally, intra-day reliability for each team member can be quantified as a coefficient of variation (CV) [SD/mean × 100] using the within-trial variability of the measures from the three trials. The three trials of each test are averaged for an individual mean value. The level of acceptance for reliability is debatable, but a CV below 15% [5] has been used in previous research [6].
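And a matching sketch for the within-day CV, assuming three hypothetical trials per athlete on one testing day; the 15% cut-off simply mirrors the threshold noted above.

```python
import numpy as np

# Hypothetical example: three trials per athlete from a single session (cm)
trials = np.array([
    [38.1, 37.4, 38.6],   # athlete 1
    [42.3, 43.1, 42.0],   # athlete 2
    [35.6, 36.4, 35.1],   # athlete 3
])

# CV = SD / mean * 100 per athlete; the trial mean is kept for later analysis
cv = trials.std(axis=1, ddof=1) / trials.mean(axis=1) * 100
means = trials.mean(axis=1)

for i, (m, c) in enumerate(zip(means, cv), start=1):
    flag = "acceptable" if c < 15 else "check the process"  # 15% threshold [5, 6]
    print(f"Athlete {i}: mean = {m:.1f} cm, CV = {c:.1f}% ({flag})")
```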
Yes, this is extra work, but isn’t that what sports science is about? In addition, it enables us to be better stewards of our measures and prevents us from managing noise (I have been guilty of this).
These are a few challenges that I face in the private sector re: the application of science to sport. What are yours?
1. Atkinson, G. and A.M. Nevill, Statistical methods for assessing measurement error (reliability) in variables relevant to sports medicine. Sports Medicine, 1998. 26(4): p. 217-238.
2. Hopkins, W.G., J.A. Hawley, and L.M. Burke, Design and analysis of research on sport performance enhancement. Medicine & Science in Sports & Exercise, 1999. 31(3): p. 472-485.
3. Koo, T.K. and M.Y. Li, A guideline of selecting and reporting intraclass correlation coefficients for reliability research. Journal of Chiropractic Medicine, 2016. 15(2): p. 155-163.
4. Weir, J.P., Quantifying test-retest reliability using the intraclass correlation coefficient and the SEM. Journal of Strength and Conditioning Research, 2005. 19(1): p. 231-240.
5. Stokes, M., Reliability and repeatability of methods for measuring muscle in physiotherapy. Physiotherapy Practice, 1985. 1(2): p. 71-76.
6. Meylan, C.M., et al., Temporal and kinetic analysis of unilateral jumping in the vertical, horizontal, and lateral directions. Journal of Sports Sciences, 2010. 28(5): p. 545-554.