Setting Benchmarks: Testing Using Force Plates in the Applied Setting
I recently viewed a Tweet from a friend asking the question: “Who would like to attend a RoundTable virtual discussion to refine a testing protocol using force plates (SJ and CMJ)?” It’s a great question, and it certainly got me thinking. What once was a luxury tool many simply could not afford is now in more and more facilities (including private facilities like mine) across North America. We started to incorporate force plate testing two and a half years ago with our hockey playing population. The first question I asked myself before procuring the technology was why? Here are my why’s:
Validity (i.e. accuracy): We previously used Just Jump mats, and our results were massively variable due to the nature of how jump height was calculated (flight time vs. impulse-momentum)
Efficiency in process: plates enable a “plug and play” mentality and enable us to fuse “testing” with training
Safety in application: We also strength test with plates using the IMTP. This allows us to test without placing large loads on the spine
Prior to acquiring plates, I spoke to individuals in the industry much smarter and more experienced than I. It was intimidating, and still is, as there are over 70 metrics to choose from in the Hawkin Dynamics system to assess the CMJ. Over 70 metrics for one jump. You could spend a whole day looking at one jump. I decided to break my metrics into categories and focus on a few I thought were important. Here is my current list for the CMJ. Side note: we do not currently test the SJ.
Performance: Jump Height
Health: Propulsive Impulse Index
Strategy: Time to Takeoff
Why these you ask? A combination of research and practical application (trial and error). The article I recommend reading for practitioners is: A Framework to Guide Practitioners for Selecting Metrics During the Countermovement and Drop Jump Tests. I really liked the theoretical considerations (biological basis, feasibility, and quality of the data) as it enabled me to whittle down certain metrics (the elimination of mRSI, while focusing on others) based on sound reasoning.
In my opinion, the practical application/trial and error is the tough stuff. It takes time. It takes effort. It takes collection of longitudinal data with YOUR population over time. It takes analysis and critical thinking. I recently dug into my longitudinal data to answer the question of asymmetry in our asymptomatic hockey playing population. Here is the blog post...and a few thoughts:
1) Do NOT underestimate first-principles knowledge: physics, programming, psychology, physiology, biomechanics. THIS IS THE GLUE that creates context. A major aha moment for me in 2023 was an interview I had with Dr. Mal McHugh titled The Case Against Adductor Squeeze Testing. I had ignored my first-principles knowledge and am thankful for his reminder.
2) Look for pre-existing literature and answer the following questions:
a. Does the sample represent my unique population?
b. Is it a reliable metric? Reliability is the degree to which the measure is free from measurement error. I look at COV (coefficient of variation). The COV is the variability relative to the mean on repeated tests. A smaller COV is better (<10% is best).
3) Test your population over time
4) Create Box and Whisker Plots (Interquartile range, median, mean, whiskers, outliers).
5) Plot the curve for YOUR athletes
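The reliability check in step 2b can be sketched in a few lines. This is a minimal illustration, not part of any vendor software: the jump-height values are hypothetical, and the COV is simply the trial-to-trial standard deviation expressed as a percentage of the mean, judged against the <10% threshold mentioned above.

```python
from statistics import mean, stdev

def cov_percent(trials):
    """Coefficient of variation: trial-to-trial SD relative to the mean, as a %."""
    return stdev(trials) / mean(trials) * 100

# Hypothetical repeated CMJ jump heights (cm) for one athlete on one test day
jump_heights = [41.2, 40.5, 42.0, 41.7]

cov = cov_percent(jump_heights)
print(f"COV = {cov:.1f}%")          # -> COV = 1.6%
print("reliable" if cov < 10 else "too noisy to trust for this athlete")
```

A COV well under 10%, as in this made-up example, is the kind of result that would let you trust a metric for longitudinal tracking; a metric that bounces around above that threshold probably isn't worth benchmarking.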
Where does your data fit? Is the metric reliable for your population? This is a time-consuming endeavor, but it will humble you, teach you the value of critical thinking, and help you set benchmarks and answer the age-old question of “what’s good?” for your athletes.
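Steps 4 and 5 can likewise be sketched with the standard library alone. The squad values below are hypothetical; the code pulls out the box-and-whisker ingredients (quartiles, IQR, median, mean, outliers flagged beyond 1.5 × IQR) and then answers "where does your data fit?" by placing a single athlete's score as a percentile rank within the squad.

```python
from statistics import mean, median, quantiles

# Hypothetical squad CMJ jump heights (cm), collected longitudinally
squad = [33.1, 35.4, 36.2, 37.0, 38.5, 39.1, 40.2, 41.6, 42.3, 44.0, 55.0]

q1, q2, q3 = quantiles(squad, n=4)           # quartiles (box edges and median line)
iqr = q3 - q1
lo_fence, hi_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr   # whisker fences
outliers = [x for x in squad if x < lo_fence or x > hi_fence]

print(f"median={median(squad):.1f} mean={mean(squad):.1f} IQR={iqr:.1f}")
print("outliers:", outliers)                 # -> outliers: [55.0]

# Where does one athlete sit relative to the squad?
athlete = 43.0
pct_rank = 100 * sum(x <= athlete for x in squad) / len(squad)
print(f"{athlete} cm: percentile rank {pct_rank:.0f} in this sample")
```

The same numbers feed directly into a plotting library's boxplot call; computing them by hand first makes it obvious which athletes the whiskers and outlier dots actually represent, and whether an "outlier" is a data-entry error or a genuinely exceptional jumper.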
Refinement? Best practices? Best metric? Unfortunately, I don’t have an answer for you, just a few ideas in the quest for those answers. There are some amazing people in this space. As new information emerges, so too shall my opinion. The important part, however, is always asking why. Always digging deeper while never letting go of your first-principles knowledge.