The Trade-Off
“We need to follow the evidence.” “What does the research tell us?” “Is it supported by science?” “Evidence-based” is a big buzzword in sports science at the moment. I am not anti-science; in fact, I love science and the scientific method. However, it’s important to understand what scientific research is and what it does and does not tell us in the applied setting. The best way to explain this is with an analogy.
Picture two worlds: the world of science and the real world. In science we live in the world of the abstract. We create theories to solve problems, and we test them. We attempt to isolate an independent variable while holding all other variables constant, then measure the effect on the dependent variable (often over a fairly short period of time). We create an environment that is nearly impossible to create in the real world. (We may also create an abstract world via in vitro experimentation and animal models.) Even then it is extremely hard to tease out confounders. Humans are VERY difficult to study; the supplement and dietary research industry is a prime example. The end result is a trade-off between control and ecological validity (a fancy way of asking whether the findings generalize to the real world).
Not many coaches in the real world have the ability to home in on one variable while holding all others constant. Imagine telling an entire sports team to hold nutrition and training constant while you study the effects of a new supplement. Imagine telling that same team to hold all programming and nutrition constant while you study the effects of a new exercise. Most “research” in the applied field is observational, with an N=1 approach. Observational research is prone to bias and confounding, and it cannot be used to imply causation. We observe, create theories, test, and iterate accordingly. Having said that, we can take several valuable lessons from the world of research and use them on OUR populations, with OUR athletes, in OUR environments in the real world. I call this mining the gap. Here are a few:
Mining the Gap
If the existing published research attempts to solve a similar problem, uses a similar sample (e.g., professional athletes) over a meaningful period of time (the longer the better), and uses appropriate methods, its findings are more likely to transfer.
If the existing published research operates under similar environmental constraints (technology, resources, people, etc.), again with appropriate methods, the same applies.
Application: creating a controlled environment within OUR unique environment. If we know why “control” matters in research, we can do our best to recreate it in our own setting. Process is key. Control the controllables.
Reliability: longitudinal data collection within OUR team. EVERY observed measure carries error, and in the applied setting much of that error comes from the process by which we collect the data. If we firm up our data collection process, we get less noise and error, and we can tell whether an observed change is real or just measurement noise (see the sketch below).
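To put a number on that “noise and error,” here is a minimal sketch in Python, assuming a hypothetical test–retest of counter-movement jump height. It uses the common typical-error calculation (standard deviation of the test–retest difference scores divided by the square root of 2). The data values, variable names, and the choice of jump height are illustrative assumptions, not anything from this article.

```python
import math
import statistics

# Hypothetical counter-movement jump heights (cm) for the same six athletes,
# tested twice under an identical, tightened-up collection process.
trial_1 = [38.2, 41.5, 35.9, 44.1, 39.7, 42.3]
trial_2 = [37.6, 42.1, 36.4, 43.2, 40.5, 41.8]

# Typical error of measurement: SD of the test-retest differences / sqrt(2).
diffs = [b - a for a, b in zip(trial_1, trial_2)]
typical_error = statistics.stdev(diffs) / math.sqrt(2)

print(f"Typical error: {typical_error:.2f} cm")
```

The payoff is simple: if a later change in an athlete’s score is smaller than the typical error of our collection process, treat it as noise, not as an effect of whatever we just changed.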
It’s not a question of one or the other. It’s understanding the limitations of both and using equal parts art and science, real world and abstract, to make better decisions. A constant work in progress.