Anthony Donskov

Brandolini’s Law, The Academic Drip, and First Principles

Bullshit!  Humbug!  There’s a lot of it floating around these days in the performance field.  I strongly recommend reading the short book titled “On Bullshit” by Harry G. Frankfurt.  In it, he builds on ideas first explored by Max Black in an article titled “The Prevalence of Humbug.”  Humbug, or bullshit, is deliberate misrepresentation cloaked in pretentious language.  Frankfurt states:


“Since bullshit need not be false, it differs from lies in its misrepresentational intent.  The bullshitter may not deceive us, or even intend to do so, either about the facts, or about what he takes the facts to be.  What he does necessarily attempt to deceive us about is his enterprise.  His only indispensably distinctive characteristic is that in a certain way he misrepresents what he’s up to.” 


From my 20 years of experience in the performance field, this is happening in areas such as long-term athlete development, teaching pedagogy, and emerging technologies.  The time it takes to refute humbug far exceeds the time it takes to produce it.  This is Brandolini’s Law, or the bullshit asymmetry principle.  My strong recommendation for combating it is a foundational understanding of first principles!  What are they?  Programming (technical/tactical), physiology, physics, psychology, and biomechanics.  Have a foundational understanding of them all.  Be a Swiss Army knife. 


An old paraphrase from Charlie Francis comes to mind:  “A sports scientist’s job is to tell me....why what I’m doing works (or not).”  Problems start with the coach, not the researcher, and not the tech company!  Why solve problems that don’t exist in the first place?

Science starts and ends with problems.  Coaches need to start the process, not academia and technology.

When you do read the research, don’t be overcome by pretentious verbiage; unpack it with common sense.  No one is perfect.  We all get fooled.  It happens all the time! 

“In 1996 physicist Alan Sokal published an essay in Social Text--an influential academic journal of cultural studies--touting the deep similarities between quantum gravitational theory and postmodern philosophy. 

Soon thereafter, the essay was revealed as a brilliant parody, a catalog of nonsense written in the cutting-edge but impenetrable lingo of postmodern theorists. The event sparked a furious debate in academic circles and made the headlines of newspapers in the U.S. and abroad.”

Book:  Fashionable Nonsense

This too is happening right now in the world of sports performance. 

Anthony Donskov

Setting Benchmarks, Testing using Force Plates in the Applied Setting

I recently viewed a Tweet from a friend asking the question: “Who would like to attend a RoundTable virtual discussion to refine a testing protocol using force plates (SJ and CMJ)?”  It’s a great question, and it certainly got me thinking.  What was once a luxury tool many simply could not afford is now found in more and more facilities (including private facilities like mine) across North America.  We started to incorporate force plate testing two and a half years ago with our hockey-playing population.  The first question I asked myself before procuring the technology was: why?  Here are my whys:

 

  • Validity (i.e., accuracy):  We used Just Jump mats previously, and our results were massively variable due to how jump height was calculated (flight time vs. impulse-momentum)

  • Efficiency in process:  plates enable a “plug and play” mentality and enable us to fuse “testing” with training

  • Safety in application:  We also strength test with plates using the IMTP.  This allows us to test without placing large loads on the spine
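The flight-time vs. impulse-momentum distinction in the first bullet can be made concrete with two standard physics formulas.  This is a minimal sketch with hypothetical numbers; the function names and values are mine for illustration, not taken from any vendor’s software:

```python
G = 9.81  # gravitational acceleration, m/s^2

def height_from_flight_time(t_flight_s):
    """Flight-time method: h = g * t^2 / 8.  Inflated if the athlete tucks
    the legs before landing -- one source of the variability jump mats show."""
    return G * t_flight_s ** 2 / 8

def height_from_impulse(net_impulse_ns, mass_kg):
    """Impulse-momentum method: takeoff velocity v = J_net / m, then h = v^2 / (2g)."""
    v_takeoff = net_impulse_ns / mass_kg
    return v_takeoff ** 2 / (2 * G)

# Hypothetical jump: 0.55 s flight time vs. 220 N*s net impulse for an 80 kg athlete
print(round(height_from_flight_time(0.55), 3))    # ~0.371 m
print(round(height_from_impulse(220.0, 80.0), 3)) # ~0.385 m
```

The two methods can legitimately disagree by a few centimeters on the same jump, which is why switching measurement methods mid-season muddies longitudinal comparisons.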

 

Prior to acquiring plates, I spoke to individuals in the industry much smarter and more experienced than I.  It was intimidating, and still is, as there are more than 70 metrics to choose from in the Hawkin Dynamics system to assess the CMJ.  MORE THAN 70 metrics for one jump.  You could spend a whole day looking at one jump.  I decided to break my metrics into categories and focus on a few I thought were important.  Here is my current list for the CMJ.  Side note:  we do not currently test the SJ. 

 

  • Performance:  Jump Height

  • Health:  Propulsive Impulse Index

  • Strategy:  Time to Takeoff

Why these, you ask?  A combination of research and practical application (trial and error).  The article I recommend practitioners read is A Framework to Guide Practitioners for Selecting Metrics During the Countermovement and Drop Jump Tests.  I really liked the theoretical considerations (biological basis, feasibility, and quality of the data), as they enabled me to whittle down certain metrics (eliminating mRSI while focusing on others) based on sound reasoning.

 

In my opinion, the practical application/trial and error is the tough stuff.  It takes time.  It takes effort.  It takes collecting longitudinal data with YOUR population over time.  It takes analysis and critical thinking.  I recently dug into my longitudinal data to answer the question of asymmetry in our asymptomatic hockey-playing population.  Here is the blog post...and a few thoughts: 

 

1)    Do NOT underestimate first-principles knowledge:  physics, programming, psychology, physiology, biomechanics.  THIS IS THE GLUE that creates context.  A major aha moment for me in 2023 was an interview I had with Dr. Mal McHugh titled The Case Against Adductor Squeeze Testing.  I ignored my first-principles knowledge and am thankful for his reminder.

2)    Look for pre-existing literature and answer the following questions: 

a.     Does the sample represent my unique population?

b.     Is it a reliable metric?  Reliability is the degree to which the measure is free from measurement error.  I look at COV (coefficient of variation). The COV is the variability relative to the mean on repeated tests.  A smaller COV is better (<10% is best). 

3)    Test your population over time

4)    Create Box and Whisker Plots (Interquartile range, median, mean, whiskers, outliers). 

5)    Plot the curve for YOUR athletes
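Steps 2b and 4 above can be sketched with a few lines from Python’s standard library.  This is a minimal illustration using hypothetical jump heights; the function names and the data are assumptions for demonstration, not my actual workflow or a vendor’s:

```python
import statistics

def coefficient_of_variation(trials):
    """COV% = (sample SD / mean) * 100 across repeated trials; smaller = more reliable."""
    return statistics.stdev(trials) / statistics.mean(trials) * 100

def five_number_summary(values):
    """Quartiles, IQR, and Tukey outlier fences -- the pieces of a box-and-whisker plot."""
    q1, median, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    low_fence, high_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outliers = [v for v in values if v < low_fence or v > high_fence]
    return {"q1": q1, "median": median, "q3": q3, "iqr": iqr, "outliers": outliers}

# Hypothetical repeated CMJ heights (cm) for one athlete -- reliability check:
print(round(coefficient_of_variation([34.1, 35.0, 33.8, 34.6]), 1))  # well under 10%

# Hypothetical squad-wide jump heights (cm) -- box-and-whisker ingredients:
print(five_number_summary([28, 31, 33, 34, 34, 35, 36, 38, 49]))
```

Running both checks on your own population is exactly the kind of longitudinal homework the list above describes: the COV answers “is it reliable?” and the five-number summary answers “where does an athlete sit in OUR distribution?”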

 

Where does your data fit?  Is the metric reliable for your population?  This is a time-consuming endeavor, but it will humble you, teach you the value of critical thinking and setting benchmarks, and help you answer the age-old question for your athletes: “what’s good?” 

Refinement?  Best practices?  Best metric?  Unfortunately, I don’t have an answer for you, just a few ideas in the quest for those answers.  There are some amazing people in this space.  As new information emerges, so too shall my opinion.  The important part, however, is always asking why.  Always digging deeper while never letting go of your first-principles knowledge. 

 

 

Anthony Donskov

Interpretation

It never ceases to amaze me how individuals interpret information: the results of a research study, a model used to make predictions, coaching outcomes, and much more.  Many times these interpretations can be polar in nature.  We humans are far from perfect.  We suffer from personal biases born of our own experiences, we cherry-pick, and we don’t spend enough time critically thinking and appraising information.

I am a proud skeptic, not a pessimist or nihilist.  In fact, far from it.  My impetus is to question my own thoughts and the ideas of others before me.  My goal is to improve my interpretive skills.  I have found the best way to do this is by questioning assumptions.  Models, research, coaching, and science are filled with assumptions.

I have found that the more I understand the limitations of the assumptions I and others have made, the better the interpretation and explanation I can offer to others.

Anthony Donskov

Wisdom from the Wise

They say wisdom starts with wonder.  Well...these gentlemen have had decades to ponder.  They have succeeded, failed, taught, and continue to evolve daily.  Here is just a glimpse of that wisdom as I recently had a rare podcast interview with four legends in the field. 

The Wisdom of humility with Michael Boyle

 The Wisdom of the generalist with Dan Pfaff

The Wisdom of complex systems with Fergus Connolly

 The Wisdom of making the “big time” where you are with Buddy Morris

 Thank you, gentlemen, for the unforgettable conversation.  Please share these episodes with ALL involved in performance and management. 

 

“Goodfellas” – The Original Godfathers of Sports Performance:  Part I

“Goodfellas” – The Original Godfathers of Sports Performance:  Part II

Anthony Donskov

Mentors & memoirs: Dan Pfaff, Michael Boyle, Buddy Morris, and Fergus Connolly

I recently had a very unique opportunity.  In fact, if someone asked: “If you were able to sit and have dinner with four coaches who’ve had a profound impact on your career, who would they be?”  Well, I lived that experience.  Although we didn’t have dinner and drinks, I was able to get everyone together and boy was it special.  Life, mistakes, mentors, coaching, advice, and lots of laughs.  In fact, I’m excited to say that I recorded it and will be sharing it with all of you on the next episode of the HPH Podcast.  This was truly an amazing chat!  So... who’s on the guest list?  Why are they important in my career?  What did they teach me? 

Dinner and Drinks with my mentors: Dan Pfaff, Michael Boyle, Fergus Connolly and Buddy Morris.

Michael Boyle

The first seat at the table belongs to none other than Michael Boyle.  Mike was my first mentor in strength and conditioning.  I remember vividly flying to Boston for one of his first mentorship programs.  Perhaps the first ever.  They say success leaves clues.  I knew of Mike through his ties with hockey, his classic DVD series Functional Strength Coach, and his website StrengthCoach.com.  I wanted to learn from him.  I remember emailing Mike every week in hopes of having my articles published on his site.  I would wait every Monday, excited in anticipation, hoping to see my name on the page.  Mike was cordial, always returned correspondence, and never made me feel that I was young, inexperienced, and green.  The first time I ever met Mike was at a Perform Better conference.  He invited me to breakfast with Mark Verstegen and was more than willing to chat, answer questions, and keep me involved in the conversation.  Mike never made me feel like I didn’t belong in the room.  This is one of the many lessons I’ve learned from Mike.

 

Fast forward to the present.  Looking back with a smile, I had the opportunity to work for Mike during an Olympic quad cycle (USA Hockey Women’s National Team), and I am proud to call him friend and mentor to this day.  Mike taught me many things, both performance-related and professional.  Here are a few: 

  • Don’t pick the pepper out of the fly shit (One of my favorites).

  • Keep it simple.

  • There’s a reason there’s a box.  Think inside it before being an “out of the box” thinker.

  • Be a lifelong learner.  Don’t apologize for changing things.

  • Treat people the way you want to be treated. 

  • It’s ok to disagree.

  • Vitamin B.E.E. and R is ok in moderation.  Live a little.

  • The goal of the coach is to eliminate the coach.

Thanks Mike.  I know many in the field call you “mentor” but I’m grateful our paths have crossed.  You’ve left an indelible mark on my career.  Love you. 

 

Buddy Morris

I love Buddy Morris.  The first time I heard Buddy speak was on the Robbie Bourke Podcast in 2014.  It was an amazing chat.  In fact, I had to pull the car over to take notes.  It was, and still is, a MAJOR influence on my programming and how I view the training cycle.  I then saw Buddy speak at an Elite FTS seminar in 2015 in Columbus, Ohio.  Buddy has had a major impact on my career in terms of programming, work ethic, and the way he carries himself as a professional.  He’s candid, passionate, extremely bright, quick-witted, and has no time for BS.  He is driven to constantly educate himself, striving to read and study daily.  Selfishly, I wish I had played on a team Buddy coached.  He inspires me a great deal.  Perhaps it’s his coaching style, his approach, the rasp in his voice, or his intimidating looks, but Buddy makes me want to get better!  Be a better coach, a better observer, learner, and leader. 

 

I was given Buddy’s contact number by Coach Dan Pfaff last year, and he immediately responded.  I sent him a copy of my book, and Buddy wrote a handwritten thank-you reply.  He has given me his time graciously, and I was lucky enough to have him on the podcast last November (most downloaded episode ever).  What does a hockey podcast have to do with football performance?  Nothing, BUT everything.  Coaching is universal.  So is Buddy’s message.  Here are a few lessons Buddy has taught me:

  • Simple programs work well for beginners and advanced athletes BUT for very different reasons

  • The greatest stresses/forces placed on the athlete come from the sport itself, NOT the weight room

  • Sport is like playing the guitar.  You must practice to excel.  This comes at a cost.

  • Beware the enter-trainer

  • Take notes.  Observe.  Take more notes.

  • Work hard.  Nothing is given.  Earn it. 

  • Take advice from people who provide honest, critical feedback.  No echo-chambers.

Thanks Buddy.  I know many in the field call you “mentor” but I’m grateful our paths have crossed.  You’ve left an indelible mark on my career.  Love you.

Dan Pfaff

One of the smartest, most humble human beings I have ever had the privilege to call friend and mentor.  Coach Dan Pfaff is a real-world Swiss Army Knife of knowledge.  He is an encyclopedia.  Coach Pfaff instilled in me the idea of being a generalist, a serial specialist.  I call these first principles:  physics, programming, psychology, physiology, and biomechanics.  He is a master at them all.  

They say great mentors point you in the right direction without telling you what to see. Well, Coach Pfaff does a tremendous job listening, providing direction, and listening even more.  It’s no surprise to receive numerous emails pertaining to research studies, video links or contact numbers from other coaches.  He is a connector, an educator, and a world class coach and human being.  I had the opportunity to be a part of the Altis Phase III Mentorship a few years back.  Simply amazing.  What an opportunity to listen to decades of experience, coaching, mistakes, life lessons and application in the applied setting.  Coach Pfaff was one of the first individuals in the performance space to reach out to me during a very dark time.  I will never, ever forget that.  Here are a few lessons Coach Pfaff has taught me:

  • Strive to be a generalist

  • First Principles

  • Longitudinal data trumps published research (keep meticulous records)

  • Watch video

  • Lead with love and humility

  • Brief and Debrief

  • Learn AWAY from your craft.  Much of it can be directly applied to your craft.

  • Listen

Thanks Coach Pfaff.  I know many in the field call you “mentor” but I’m grateful our paths have crossed.  You’ve left an indelible mark on my career.  Love you.

Fergus Connolly

The final seat at the table belongs to the Irishman himself, Fergus Connolly.  I met Fergus in Columbus, Ohio, years back while he was visiting West Side Barbell.  We had a coffee, a deep chat ensued, and ever since, Fergus has been a go-to for advice about life, performance, and everything in between.  Like Dan, Fergus is a serial specialist.  One of the smartest human beings I know.  He just thinks differently and has an amazing way of articulating his message.  His book Game Changer was years ahead of its time.  Honestly, it changed my view on the importance of everything AWAY from the weight room.  I speak to Fergus twice a month about everything from tech, sport, and life to leadership.  He was one of the first individuals who called me during an extremely difficult time in my life.  He made me laugh, listened to me cry, and always had my back.  Love you, Ferg.  I will never forget the gesture and our friendship.  You are a leader, a critical thinker, and head of the class in almost all areas of human performance.  Here are a few lessons Fergus has taught me:

  • The 4 co-actives

  • Game patterns

  • When and when NOT to use tech

  • Complex systems

  • Pitfalls and life lessons in leadership

  • Critical thinking

  • Brandolini’s Law

  • Sheepdogs

Thanks Ferg.  I know many in the field call you “mentor” but I’m grateful our paths have crossed.  You’ve left an indelible mark on my career.  Love you.

Well...there you have it.  My table.  The table.  What an experience.  Perhaps next time, dinner, drinks and cigars?  Who knows. Can’t wait to share the chat with all of you.  The podcast is set to air 11/20/23. 

Anthony Donskov

Taylorism and sport performance

I’ve written recently about understanding the assumptions behind models.  Models are simply the assumptions of scientists and experts used to make predictions.  These assumptions may be objective, subjective, or a combination of both.  Models are subject to human error.  They’re not perfect, but they should offer superior explanation.  Perhaps the biggest set of assumptions performance professionals should seek to understand comes from a book first published in 1911, “The Principles of Scientific Management” by Frederick Taylor. 

I was first turned onto his work by a colleague, John Kiely, through his masterful article Periodization Paradigms in the 21st Century:  Evidence-Led or Tradition-Driven? (in my humble opinion, a mandatory read for performance coaches).  The article speaks about the idea of Taylorism.  Frederick Taylor coined the term “task management” as a synonym for scientific management.  Taylor observed factory workers, such as pig-iron handlers, bricklayers, and shovelers, in the steel industry.  He then timed the execution of each task, observed efficiency in movement and workflow, and created a standardized process known as scientific management.  The principles are listed below: 

 

1.)   Break the job into parts.  Develop a “science” for each part, which replaces the old rule-of-thumb method.

2.)   Scientifically select and train, teach, and develop the worker.

3.)   Cooperate with the workers, ensuring work is done in accordance with scientific principles.

4.)   Equal division of work and responsibility between management and workmen. 

 

“The development of a science, on the other hand, involves the establishment of many rules, laws, and formulae which replace the judgment of the individual workman, and which can be effectively used only after having been systemically recorded, indexed, etc.” – Frederick Taylor

 

The tentacles of scientific management spread rapidly.  Its principles were used by Henry Ford in the automotive industry for efficiency and production, and in the 1930s in the Russian economy with the use of quotas over fixed periods of time.  These were known as 5-year plans.  The BIGGEST difference between the two is that one is a fixed system (an assembly line) and one is a complex system (an economy).  The results of these experiments were drastically different:  Ford revolutionized the auto industry, while famine and starvation occurred in Russia during the early 1930s. 

 

Taylorism has spread throughout the world and is still used today in many realms.  The End of Average by Todd Rose speaks about the consequences of Taylorism today.  Why does this matter for performance coaches?  Periodization models, long-term athletic development models, tissue healing models, and training residual models ALL incorporate Taylorism.  So, what’s wrong with that, you say?  Nothing, if you’re comfortable with the assumptions of the model and can critically rationalize each limitation.  Here are the assumptions:

 

  • There is one best method - Top-Down Dictates

“One of the important objects of this paper is to convince its readers that every single act of every workman can be reduced to a science.”  -Frederick Taylor

 

  • Linear systems mimic complex systems

“It is true that the laws which result from experiments of this class, owing to the fact that the very complex organism (the human being) is being experimented with, are subject to a larger number of exceptions than is the case with laws relating to material things.  And yet the laws of this kind, which apply to a large majority of men, unquestionably exist, and when clearly defined are of great value as a guide in dealing with men.”  -Frederick Taylor

 

  • Rigid Standardization – Averages are superior

“And the duty of enforcing the adoption of standards and of enforcing this cooperation rests with management alone.”  -Frederick Taylor

 

  • Clouds are Clocks: Complex systems are predictable

“.... he must consequently be trained by a man more intelligent than himself into the habit of working in accordance with the laws of this science before he can be successful.” -Frederick Taylor

 

  • Timelines dictate efficiency

“It is this combination of the initiative of the workmen, coupled with new types of work done by management, that makes scientific management more efficient than the old plan.” -Frederick Taylor

 

We use periodization models.  We believe in long-term athletic development.  But these models are NOT perfect.  In fact, far from it.  Complex systems are inherently different from Newtonian linear systems and include the following features:

 

  • Numerosity:  Complex systems involve many interactions among their parts.  If A → B, B may be affected by hundreds of variables.

  • Emergence:  In a complex system, the whole of the system may behave differently than each part in isolation.  Reductionist approaches fall short.  The response (emotional or physical) to one individual may be drastically different than the response of another. 

  • Non-Linear:  Complex systems exhibit nonlinear dependence.  Timelines pose difficulty in the face of this dependence.  Dependencies such as resources, program variables, and genetics may or may not “marry up” with pre-determined timelines. 

  • Adaptive Behavior:  Complex systems modify their behavior in different environments.

 

“Complexity science is often contrasted with reductive science, where the latter is based on breaking wholes into parts.”  -James Ladyman

 

What do Frederick Taylor and his book “The Principles of Scientific Management” have to do with performance?  Nothing, but everything.  In fact, I recommend you read it front to back, like I did.  In order for us to grow as skeptical thinkers, we need to understand models, their limitations, and their inefficiencies, and not just accept them as is.  We can also pivot in times of change, such as:

  • An athlete coming back from injury

  • Altering programming variables in the weight room

  • Limitations of testing

  • Long-Term athletic development pedagogy

Models are great, but I have found that those who question with curiosity and humility can improve the very process while enhancing their critical thinking skills.  Taylorism is alive and well today.  Nothing wrong with that, as long as you understand the assumptions.  Know the rules (assumptions) before you break them.

Anthony Donskov

L | R Impulse Index:  Observations from the Private Sector

I make mistakes frequently.  It’s part of the learning process.  Perhaps one of the biggest mistakes I’ve made in my 20-year coaching career is not keeping meticulous records.  It’s interesting to hear debates on social media consistently backed by published research.  I think that’s a great start (better to be backed by solid research than not), but the biggest form of “research” coaches should take to heart in the applied setting is longitudinal data from their OWN unique population.  This takes time, repetition, diligence, and organization.  What’s good?  What’s reliable?  Those questions aren’t easy to answer.  In the world of metrics and data, longitudinal information wins the gold medal; published research takes second. 

I recently saw a Twitter post from a fellow colleague regarding her use of force plates in the return to play setting. 

This got me thinking and led me to my own investigation (i.e., my longitudinal data stream).  We have used force plates for over two years in the private setting and have had the opportunity to test hundreds of hockey players.  We look at a few metrics and break them down into buckets: health, strategy, and performance (thanks to my friend Eric Renaghan).  Our health metrics are the L|R Braking Impulse Index and the L|R Propulsive Impulse Index.  We would use these numbers as a baseline should an athlete get injured, and as objective markers in the RTP process.  According to Hawkin Dynamics, the definitions of these metrics are as follows:

L|R braking asymmetry index: “The asymmetry between the left and right vertical impulses applied to the system center of mass during the braking phase.”   

 

L|R propulsive impulse index: “The asymmetry between the left and right vertical impulses applied to the system center of mass during the propulsion phase.”     
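The definitions above describe an asymmetry between left and right impulses without reproducing the exact formula.  One common convention expresses the signed difference as a percentage of the total impulse; the sketch below uses that convention and is illustrative only — treat the function name, formula, and values as assumptions rather than the exact Hawkin Dynamics calculation:

```python
def lr_impulse_asymmetry(left_ns, right_ns):
    """Signed percent asymmetry between limb impulses (N*s).
    Convention here: (L - R) / (L + R) * 100 -- an illustrative choice,
    not necessarily the exact vendor formula.  Sign shows direction."""
    return (left_ns - right_ns) / (left_ns + right_ns) * 100

# Hypothetical braking-phase vertical impulses for one jump:
asym = lr_impulse_asymmetry(118.0, 102.0)
print(f"{asym:.1f}%  ->  {'flag (>15%)' if abs(asym) > 15 else 'within 15% band'}")
```

Whatever convention you adopt, use the same one for every athlete and every retest, or longitudinal comparisons fall apart.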

 

Injury and asymmetry are complex issues in the world of performance.  Injury is multi-factorial, with hundreds of confounders at work, and asymmetry, many would argue, is a natural human quality, not just a product of the repetition of playing sport.  In fact, practitioners such as those of the Postural Restoration Institute (PRI) would state that the body is designed asymmetrically based on the viscera.  Research is also variable with cutoffs and thresholds, which can get confusing.  Side-to-side differences within 15% of the contralateral limb have been recommended for functional tests involving jumping in RTP settings (2), while lower asymmetry levels may affect performance in healthy athletes (1). 

So, I decided to put my longitudinal data to the test to answer the following questions:

  • What does asymmetry look like in an asymptomatic population of athletes re: jump testing?  How big is this asymmetry?  (I used a >15% threshold)

  • Which metric is more reliable? In essence, which metric has less variability?  

Here is the investigation, by the numbers:

  •  N = 160 asymptomatic athletes

  • Ages 14-18 years old

  • 380 jumps were analyzed (note: athletes in this investigation jumped multiple times and an average score was used)

I calculated the mean and standard deviation for each metric while also calculating the normal distribution.  I then graphed this distribution via bell curve and calculated the 1st, 2nd and 3rd standard deviation from the mean. 
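That workflow (mean, standard deviation, normal-distribution banding, and flagging jumps beyond a threshold) can be sketched as follows.  The asymmetry values here are hypothetical, and the choice of the population SD is my assumption for illustration:

```python
import statistics

def distribution_summary(asymmetries, threshold=15.0):
    """Mean, SD, share of values within 1 SD of the mean,
    and count of values beyond the absolute asymmetry threshold."""
    mean = statistics.mean(asymmetries)
    sd = statistics.pstdev(asymmetries)  # population SD; use stdev() for a sample estimate
    within_1sd = sum(1 for v in asymmetries if abs(v - mean) <= sd) / len(asymmetries)
    flagged = sum(1 for v in asymmetries if abs(v) > threshold)
    return mean, sd, within_1sd, flagged

# Hypothetical L|R asymmetry indices (%) from eight jumps:
sample = [-3.0, 4.5, -12.0, 8.0, 16.5, -1.0, 2.0, -18.0]
mean, sd, share, flagged = distribution_summary(sample)
print(f"mean={mean:.2f}  sd={sd:.2f}  within 1 SD={share:.0%}  flagged >15%: {flagged}")
```

With a real data set of 380 jumps, the same summary gives you the 1st, 2nd, and 3rd SD bands directly (mean ± sd, ± 2·sd, ± 3·sd) for plotting the bell curve.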

L | R Braking Impulse Index

  • 68% of the jumps tested had an asymmetry between -11 and 10.8

  • 27.2% of the jumps tested had an asymmetry between -21.8 and -11, or between 10.8 and 21.7 (68 tests were >15%)

  • 5% of the jumps tested had an asymmetry between -32.7 and -21.8, or between 21.7 and 32.5. 

 

L | R Propulsive Impulse Index

  • 68% of the jumps tested had an asymmetry between -6.26 and 7.15

  • 27.2% of the jumps tested had an asymmetry between -12.97 and -6.26, or between 7.15 and 13.86 (13 tests were >15%)

  • 5% of the population of healthy athletes had an asymmetry between -32.7 and -21.8, or between 21.7 and 32.5

Questions Answered (Maybe?): 

What does asymmetry look like in a healthy population of athletes re: jumps?  How big is this asymmetry?

  • 18% of all jumps tested (68 of 380) had L | R Braking Impulse Index asymmetries >15%.  This may pose problems when looking at this metric solely for RTP purposes.  Is this normal variability?  An underlying issue?  An accurate measure still may not be a good measure when close to 20% of the tests showed >15% asymmetry in asymptomatic athletes.

  • 3.4% of all jumps tested (13 of 380) had L | R Propulsive Impulse Index asymmetries >15%. 

 

Which metric is more reliable? In essence, which metric has less variability?

  • Although the contraction profiles are different, it appears that the L | R Propulsive Impulse Index is less variable, and thus a more reliable measure.

Limitations:

  • Relatively small sample size

  • Randomization, learning effect, and testing time were not controlled.  I would argue that in most applied cases, this happens more often than not. 

Closing Remarks:

One metric alone should never rule the RTP space.  It’s important for coaches in the applied setting to understand the metrics within their unique populations.  This takes time.  The other major concern I have is over-reliance on numbers and metrics at the expense of first-principles knowledge (physics, programming, psychology, physiology, and biomechanics).  These, I would argue, give the practitioner the context and narrative that let the numbers “speak.”

Looking at longitudinal data, teasing out metrics that may be less reliable, having a strong coach’s eye, and holding a sound understanding of first principles are critical factors in epistemic growth.  Common practice doesn’t always equate to best practice.

 

References

1. Bishop C, Read P, McCubbine J, and Turner A. Vertical and horizontal asymmetries are related to slower sprinting and jump performance in elite youth female soccer players. J Strength Cond Res 35: 56-63, 2021.

2. Myer GD, Paterno MV, Ford KR, Quatman CE, and Hewett TE. Rehabilitation after anterior cruciate ligament reconstruction: criteria-based progression through the return-to-sport phase. J Orthop Sports Phys Ther 36: 385-402, 2006.

 

Anthony Donskov

Assumptions

According to the Oxford Dictionary, the definition of “assumption” is “a thing that is accepted as true or as certain to happen, without proof.”

Science and research are filled with assumptions.  Statistics and probability are filled with assumptions.  Models are built on assumptions.  In fact, models are oversimplifications of reality.  They are built from the assumptions of “experts” and are used to make predictions in the real world.  Science, probability, statistics, and modeling are not perfect because there is always error involved.  Measurement error and random error (sampling bias, animate vs. inanimate subjects, the setting of the study: experimental vs. observational) are all sources of potential error. 

“Great!  Who cares!  That’s the best we have!”  I’m fine with that response when chatting with fellow colleagues; however, my concern is understanding the assumptions involved.  In essence, improving critical thinking skills and skepticism.  Don’t be so quick to state “the model proves…,” “the model shows…,” or “the model works…” without understanding the assumptions.  We should strive to understand the underpinning presumptions of things such as: 

  • Periodization Models

  • Long-Term Athletic Development Models

  • Training Residual Models

  • Movement Models

  • Biomechanical Models

“Science is the belief in the ignorance of experts.”  - Richard Feynman

 

Science evolves by creating better explanations for problems.  In order to do so, understand the assumptions behind the issue at hand. Question them, pick them apart, and strive to create better explanations.  At least, be able to defend them and understand their limitations. 

Anthony Donskov

Unintended Consequences

The law of unintended consequences states that the actions of human beings, governments, and complex systems always have effects that are unanticipated or unintended.  A beautiful narrative called “The Cobra Effect” illustrates this concept.

 

“The Cobra Effect refers to the unintended negative consequences of an incentive that was designed to improve society or individual well-being. The term derives from an attempt to eradicate snakes in India, wherein people bred cobras to collect rewards for their capture.” -Psychology Today

 

As sports performance professionals, we too must consider unintended consequences, whether incentivized or not.  By performing a simple pre-mortem checklist, we can better prepare for these potential pitfalls.   

 

Pre-Mortem Checklist:

 

  • If A → B, what else may affect B?  What are potential confounders?

  • If A → B, what happens to abilities C–Z?

  • What are my assumptions?

  • How can I minimize adaptational risk while upsizing gains?


A few simple heuristics we use to offset these consequences:

  • 1/N Heuristic:  Diversify the stress portfolio.  Train multiple bio-motor abilities within the micro-cycle.  If targeted interventions occur, adjust slightly.  Measure, manage.

  • Big Picture – Train the ability, don’t chase the metric:  We use force plates, and I have made, and continue to make, mistakes as I learn how to use them appropriately in the private sector.  We look at 3 variables:

1.     Health:  Propulsive/Braking Impulse Index

2.     Performance:  Jump Height

3.     Strategy:  Time to Take Off

 

If we see a limitation, we view it holistically, not piecemeal.  I was challenged recently by a friend and colleague, Mal McHugh, who asked: “Why would anyone look at the output metrics individually on a force-time curve?  It’s like looking at an EKG.  The picture tells the story (paraphrased).” 
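As an aside, the performance and strategy variables above fall out of basic physics.  Here is a minimal Python sketch (hypothetical function names, a deliberately simplified take-off detection, and not our actual force-plate pipeline) showing how jump height and time to take-off can be derived from a force-time trace via the impulse-momentum theorem:

```python
import numpy as np

def cmj_metrics(force_n, mass_kg, fs=1000):
    """Derive jump height and time to take-off from a countermovement-jump
    force-time trace sampled at fs Hz.  Simplified sketch, not a clinical tool."""
    g = 9.81
    # Take-off: first sample where the plate is essentially unloaded
    takeoff_idx = int(np.argmax(force_n < 10.0))
    # Impulse-momentum theorem: net impulse up to take-off sets take-off velocity
    net_force = force_n[:takeoff_idx] - mass_kg * g
    v_takeoff = np.sum(net_force) / fs / mass_kg
    jump_height = v_takeoff**2 / (2 * g)    # projectile motion after take-off
    time_to_takeoff = takeoff_idx / fs
    return jump_height, time_to_takeoff

# Synthetic trace: an 80 kg athlete stands for 0.1 s, pushes an extra
# 800 N for 0.2 s, then leaves the plate.
bw = 80 * 9.81
trace = np.concatenate([np.full(100, bw), np.full(200, bw + 800), np.zeros(50)])
h, t = cmj_metrics(trace, 80.0)  # h ≈ 0.20 m, t = 0.3 s
```

The point of the sketch is Mal’s point: every output metric here is computed from the same underlying force-time curve, which is why the curve, not any single number, tells the story.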

 Unintended consequences are part and parcel of operating in complex environments.  Use them to improve your critical thinking skills. 

Anthony Donskov

Random Thoughts

Comment: The Wizard of Oz is an old fable that has nothing to do with, and yet everything to do with, the current “measurement” and “technology” craze in sport performance.  The Wizard is believed to run the beautiful Emerald City and is thought to be an all-powerful, omnipotent presence, only for us to find out that he’s just a common man behind a curtain.

Random thought:  I believe this story parallels many of the current trends I see in sports performance.  Teach the performance coach the technology, not the technologist the sport.  Ground your reasoning in a sound knowledge of first principles and game IQ.  Esoteric banter should not be the basis for common discourse.  Tech is great; use it to enrich decision making, but understand its limitations and assumptions.  Behind the curtain, every practitioner should answer the fundamental question: “How does it help the player?”  Interesting is NOT synonymous with important. 

Comment: When you increase the number of winners, you decrease the value of winning

Random thought:  We have a supply and demand issue in sports performance.  It appears that the more concerned we become with the lack of pay (outside big revenue sports such as football), the more we resort to improving our resumes via academia.  We graduate approximately 32,740 kinesiology and exercise science undergrads each year in the USA.  That’s NOT including master’s and PhD students.  The new craze in sports science is the PhD.  It increases the depth of the talent pool, but not so much the pay. 

According to Zippia.com, the average strength and conditioning coach makes $49,382/year with an average hourly rate of $23.74.  The problem, in my opinion, will only get worse. 

Comment: “Don’t pick the pepper out of the fly shit.”  -Michael Boyle

Random thought:  Do we really think that improving one single metric (for example: mRSI or force at minimal displacement) is really going to affect the scoreboard?  I’m a geek, I must confess, but how deep do we need to dig before completely losing track of the performance landscape?  I always ask myself:

  • How does it help the player?

  • Will it affect the scoreboard?

  • If I improve one metric, will it negatively affect another? 

  • How much time do I have with the athlete?

Additional questions to consider:

  • Did the player give maximal effort?

  • Does the player care?

 

We monitor to provide targeted interventions, not face lifts.  We also work with large teams during the hockey season. Our programs involve a 1/N philosophy.  We choose 1-3 metrics for the following:

  • Movement

  • Stress Response

  • Performance

We aim to keep it simple. Measure, monitor, compare amongst the team, make subtle changes. 

Comment: Long-Term Athletic Development:  Is it just theory or is it being put into practice?

Random thought:  I hear many lecturing on the benefits of playing multiple sports, training biological windows of adaptation, free play, making sport fun and interactive, etc., etc.  In fact, I’ve written about this many times.  I’ve also created a manual outlining its use in the private sector.

Hockey is now a year-round endeavor.  In fact, tryouts for most teams around the US are literally weeks after the hockey season ends.  At certain ages (12–18) some players may grow up to 2–3” and gain 10–15 lbs.  This maturation has tangible effects on the ice.  Furthermore, summer tournaments, summer skates, and summer “combines” leave little time for off-ice focus and multi-sport play.  This is permeating into younger and younger age demographics.  So, is long-term athletic development just a great theory, or is it being practiced?  Yogi Berra said it best: “In theory there is no difference between theory and practice.  In practice there is.”  Watch what others are doing, not what they’re saying.  The proof is in the pudding. 

 

Comment: “What is one intuition or assumption you have about hockey that greater access to data would allow you to test?”

Random thought:  Some problems cannot be solved with more data. Recognizing the limits of the possible is the beginning of wisdom. I have realized that the scoreboard outcome is largely determined by judgment, chance and skill, all of which are quite impossible to measure.

Anthony Donskov

The Invisible Surgeon

Injury is a complicated phenomenon.  In complex environments such as sport, it’s nearly impossible to pinpoint the root cause.  The news headline reads: “Athlete ‘X’ pulled his groin and will not return to the ice for up to 2–3 weeks.”  How did this occur, you ask yourself?  Truth be told, if I’m sitting in a room and someone who has no affiliation with the team has the answer, I’d prefer to change rooms.  There’s nothing wrong with healthy conjecture, but a dose of epistemic humility goes a long way in offsetting overconfidence.  The paradox is that many times those closest to the ice or field don’t have the answers, as thousands of confounders are at play.  Lifestyle, biomechanics, programming, and poor medical intervention may all play a role in injury.  It’s impossible to predict. 

If an injury does occur (pending severity), our goal as performance professionals is to serve as the invisible surgeon.  How can we safely get the player back on the ice without re-injury, or worse yet, surgery?  In the private sector, we focus on:

  • Lifestyle education:  This occurs at the onset of the hockey season and in our coaching briefs – debriefs.  We lecture on nutrition, hydration, and the importance of sleep, equipping our athletes with the requisite knowledge and empowering them to make better decisions away from the weight room and the ice.

  • Programming:  Great invisible surgeons have a solid understanding of:

o   Soft Tissue Healing Timelines:  Inflammation, Repair, Remodel

o   Physics:  Levers, torques, and strain regarding exercise selection

o   Muscle Function

o   MVF (Maximal Voluntary Force Production):  Lowest for concentric contractions and highest for eccentric contractions

 

  • Medical (End stage Rehab)

o   The “gap” between the weight room and the ice

o   Proper On-Ice Progression:  Intensity, rink geometry

If an athlete does sustain a soft tissue injury during the hockey season, we make modifications based on the above criteria.  Injuries are difficult to pinpoint and impossible to predict.  There are no certainties in complex environments.  Our goal is to continually iterate our processes and improve the controllables, which in the big scheme of things are a small piece of the performance puzzle.  Injuries and surgeries will occur.  Health ends where sport begins.  Those are the cards we are dealt.  Having said that, I view great practitioners as invisible surgeons, minding the gap between setback and return. 

 

Anthony Donskov

“What the Research States”

“It depends” is a narrative we use quite often in performance circles when communicating amongst practitioners.  It’s difficult to speak in black and white when human performance resides in shades of gray.  We operate in complex environments.  “It depends” should foster additional communication, not end it.  I was reminded of this by a tweet a few years back from my friend Stuart McMillan. 

I couldn’t agree more!  In fact, for me, “what the research states” is used just as commonly as “it depends.”  “What the research states”...regarding long-term athletic development.  “What the research states” about plyometrics.  “What the research states” about sprinting.  “What the research states” about load monitoring...and the list goes on.  Bottom line, this should NOT end the conversation; in FACT, it should be the start of it.  AND the burden of “proof” should reside with the individual making the claim.  Here are a few questions to consider: 

  • Who said it?  How do they know?

  • Why else may it be?

  • Is this an apples-to-apples comparison?

  • What are the assumptions?

You’d be surprised how many claims of “what the research states” are nothing more than skimming the abstract of a journal article.  Nothing more, nothing less.  Conversely, if you are making the claim, be prepared to answer these questions to the best of your ability.  This involves reading the methods and results sections of the research article, not just the abstract.  “What the research states”: 1) may not be relevant to your population, 2) may not be good research, and 3) may rest on assumptions that are not relevant in your environment.  Research provides explanation.  Explanation provides theory.  The best theories provide superior explanation. 

Anthony Donskov

The Education Gap:  Becoming a “Pracademic”

I’m a big fan of education.  In fact, I went back to school the better part of four years ago to pursue my PhD in Kinesiology.  I guess you’d call me a passionately curious life-long learner.  I’ve also run a private business for the better part of 18 years.  These experiences, both academic and pragmatic, have shaped my current view of the educational landscape of the strength and conditioning profession.  I’ve spoken about this before with my friend Keir Wenham-Flatt. 

 

Some say there are two worlds.  World #1: academia, and world #2: the real world.  Perhaps there is a third, where both academics and pragmatists thrive?  This, to me, is the gap!  The world of the “pracademic.”  I believe this education comes from the apprenticeship model of learning by doing, coupled with relevant academic readings attained with a library card or trusted search engine.  Bottom line, from my personal experience, a lot of my undergraduate work was not relevant to becoming a successful coach in the private sector. 

 

“We teach a lot of things in medical school that are completely irrelevant to the practice of medicine.  A lot of things, actually.  And then we do not teach things which are extraordinarily relevant to the practice of medicine.”   - Anupam Bapu Jena (Econ Talk Podcast Interview)

 

What is a Pracademic?

 

Pracademic = Apprenticeship + Relevant Course Materials

 

Apprenticeship is crucial in the development of great strength and conditioning coaches.  Without it, one is stuck in world #1.  The problem with world #1 is that learning in this world is highly reliant on rote memorization.  In addition, course materials taught in this world may not be overly relevant in the pragmatic setting.  You can become a great race car driver without knowing the details of EVERY nut and bolt underneath the hood.  In my 18 years of private sector coaching, I have never had a conversation with an athlete about muscle spindles, actin, myosin, or cross-bridging.  Is it good to know?  Perhaps.  Need to know?  I would suggest not.  If you want to become an outstanding coach, become an apprentice.  Seek mentorship.  Mentorship fosters an autodidactic approach to knowledge acquisition coupled with real-world application in the target environment.  This is learning in both worlds #1 and #2.  This is the first step in bridging the educational gap. 

 

Relevant course materials are the final piece of bridging the gap between world #1 and world #2.  Education is a process that is never complete.  The best coaches provide the best EXPLANATORY theories (yes, theories are educated guesses).  Education in this world is not multiple choice or rote memorization, but explanatory knowledge.  Theorizing.  The best theories provide superior explanation and are tested frequently.  Perhaps this is why I enjoyed my PhD journey so much.  I enjoyed the subject matter, my mentor/supervisor, and the apprenticeship model of knowledge acquisition.  My “test” was an oral and written defense, not fill-in-the-blank.  What are the relevant course materials, you ask?  Here are the topics in which I believe EVERY coach should have surface-level competency.  I call these first principles. 

 

  • Critical thinking:  How to interpret data – information.  Embracing skepticism 

  • Programming:  Contrast between world #1 and world #2.  Complex systems

  • Physiology

  • Psychology

  • Physics

  • Biomechanics

 

You don’t need a four-year degree to attain a baseline understanding of these principles.  A library card, an hour a day of reading, and mentorship are the best ways to start this journey.  You can learn via a blocked approach (choose 1 principle to study for a set time period) or a random approach (choose multiple principles to study for a set time period).  Great, so what books, what articles?  Revert to your mentor.  The best mentors point you in the right direction without telling you what to see.  Bridging the educational gap between world #1 and world #2 is not overly complex, but the work involved is a never-ending pursuit.  In my opinion, there is no undergraduate degree that compares.

Anthony Donskov

The Simplicity Circle

Sometimes life comes full circle.  Like when you finally realize Dad was right all along.  The little things really do matter, and the best program likely isn’t the one that your athletes are currently on!  I hear many coaches say, “When I look back at my old programs, I cringe.”  “I can’t believe I did that.”  “It’s called learning, and change is progress.”  Truth be told, after 20 years of coaching, my programming hasn’t changed all that drastically in recent years.  I remind myself of the beautiful quote from John Wooden.

 

“There is no progress without change, but not all change is progress.”

 

The Simplicity Circle

Looking back in the rearview, I believe the training journey for the young athlete is VERY similar to the coaching journey for the seasoned coach.  The training journey begins with simplicity and ends with simplicity; somewhere in between, we have the tendency to over-complicate things.  As I’ve aged in this profession, both pragmatically and academically, I find myself back where I began, embracing simplicity. 

The Simplicity Circle: Keep it simple

Here are many of the mistakes I made as a young coach:

  • Viewing the training process as a microwave and not a slow cooker

  • Confusing simple with easy

  • Not fully appreciating the depth of simplicity

  • Failing to understand complex systems

  • Not mastering first principles:  psychology, physiology, programming, physics, biomechanics

  • Over-complicating programs as athletes progress in their training journeys

  • Failing to understand WHY simple programs work for BOTH beginners and ADVANCED athletes AND the different reasoning as to WHY this is the case

  • Failing to understand the game demands and forces experienced in the actual training environment

  • Not keeping “the goal,” “the goal”

  • Thinking “the lab” is the weight room and not the ice

Whether we’re cognizant of it or not, we’re all guilty of some sort of Dunning-Kruger effect.  Coaching, sport, and life teach lessons long after the mistakes have been made.  There is a time and place for complexity and experimentation, as both are important.  However, the older I’ve gotten, the more I believe simplicity is synonymous with sophistication, as long as it’s underpinned by a solid understanding of game demands and first-principles knowledge.

 

 

Anthony Donskov

Don’t bring a knife to a gun fight

A decade ago, it was the Olympic lifting craze.  A technical model ensued, based on Olympic lifting competitors and coaches.  Today, it’s the track and field sprinting craze.  A technical model has likewise ensued, based on competitors and coaches alike.  Are they both appropriate tools for team sport athletes such as hockey players?  Perhaps, and perhaps not!  A few of the filters that I consider prior to programming for hockey players:

Filter #1.     Training age:  How long has the athlete trained? 

Filter #2.     Injury Profile:  Has the athlete been hampered with chronic soft tissue strains?

Filter #3.     Time:  How can we keep “the goal, the goal” pending training age? Can I get a similar training effect without overcoaching? 

Filter #4.     Resources:  Do we have access to ice? 

 

The Swiss Army Knife and The Sniper

Young athletes (Swiss Army Knives) need variety and progressive overload. Advanced athletes (Snipers) thrive in refining skills in the target environment.

 The Swiss Army Knife

We follow the same 1/N philosophy for Swiss Army Knives (young athletes) and Snipers (advanced athletes).  1/N states that we diversify our stress portfolio during the training week (strength, speed, power).  This enables us to minimize adaptational risk while upsizing gains.  If we make a programming mistake, it’s not as large as putting all our stress eggs in one basket. 
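As a toy illustration of the 1/N idea (the function name, unit counts, and tilt fraction are all hypothetical, not our actual template), a 1/N allocation simply splits the weekly stress budget evenly across bio-motor abilities, with a small optional shift when a targeted intervention is flagged:

```python
def one_over_n(weekly_units, abilities, target=None, tilt=0.1):
    """Diversify the stress portfolio: equal share per ability, with an
    optional slight shift toward one targeted ability."""
    plan = {a: weekly_units / len(abilities) for a in abilities}
    if target is not None:
        shift = weekly_units * tilt          # fraction moved toward the target
        for a in abilities:
            plan[a] -= shift / len(abilities)
        plan[target] += shift
    return plan

base = one_over_n(12, ["strength", "speed", "power"])
# → {'strength': 4.0, 'speed': 4.0, 'power': 4.0}
tilted = one_over_n(12, ["strength", "speed", "power"], target="speed")
# speed gets a slight bump; total weekly stress stays at 12 units
```

The design point is the one above: no single basket ever holds enough of the stress eggs for one programming mistake to be catastrophic.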

So, Olympic lifting vs. loaded jumps?  Sprinting vs. COD?  For Swiss Army Knives, refer to Filters #2 and #3.  Program accordingly.  Caveat:  remember, team sports rely on multiple bio-motor abilities, not just linear sprinting!  The Swiss Army Knife needs a well-rounded, balanced approach. 

 

The Sniper

The Sniper (advanced athlete) needs a targeted approach!  Refer to Filter #4!  Do you have access to the target environment?  The BEST way to refine a skill is to PRACTICE the skill in the target environment!  PRACTICE is key.  Random, blocked, whatever floats your fancy.  The Sniper needs to keep the goal being “the goal.”  This comes at the expense of other non-specific modalities.  Skill refinement is crucial.

 Wall Work - Rims

Puck Protection

To Olympic lift, or not to Olympic lift?  To sprint, or not to sprint?  That’s not the question!  The goal is to understand the difference between Swiss Army Knives and Snipers and the filters that dictate their programming.  Don’t bring a knife to a gun fight.

Anthony Donskov

How Does it Help the Player?

The hockey season is upon us.  Another training camp in the books.  This has been a time of reflection for me as a performance coach as we’ve had the privilege of working with the same organization for 17 years.  Some say there’s a difference between 1 year of experience 17 times, and 17 years of experience.  I would hope that we have used our 17 years of experience to refine, craft and improve our process.  Our process revolves around one simple underlying question:

 

How does it help the player?

 

Why are you doing what you’re doing?  How does it help the player? 

You’re performing test “X”? How does it help the player? 

You’re tracking metric “X”? How does it help the player? 

You’re providing lifestyle education? How does it help the player? 

So, what do we “test”? Why does it matter?  And most important, how does it help the player?  Here are a few things we always consider at DSC.

 

  • Can testing be training and training be testing?

  • What are the strongest off-ice correlates to on-ice speed? Can we measure these?

  • How can we retrieve relevant information without over-taxing the athlete?

  • Are the tools that we use valid and reliable?

  • Can we create a targeted intervention in a team environment?

  • Can we communicate effectively to the player? 

  • Does it drive performance?

Here is our current list of “tests”, the old tests that they now replace, and how it helps the player!

No matter the technology, metric, or test, the same thought process is at the root: 

How does it help the player?

This permeates into our nutrition, hydration and sleep lectures as these decisions are arguably more important than weight room touches.  The goal is to keep the goal the goal!  Stay healthy, stay strong and stay on the ice.  How does it help the player? 

Anthony Donskov

Clouds and Clocks: Newtonian tools in a complex world

“The quest for precision is analogous to the quest for certainty, and both should be abandoned.” – Karl Popper

“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”

— Frank Herbert

I use technology.  I use it daily to create efficiency in process and objectivity in communicating results.  I do, however, believe there is an inherent assumption when I am using any sort of technology.  The assumption is this: 

 

Technology design is based on a Newtonian-reductionist approach

 

Reductionist?  Newtonian? What do you mean?  The clouds and clocks analogy given by Karl Popper is a beautiful illustration.

A slide taken from the High Performance Hockey Masterclass depicting clouds (World 3) and clocks (World 2).

Clock: Reliable, predictable, orderly, inanimate, Newtonian-Reductionist

Clouds: Irregular, disorderly, complex, random, unpredictable, animate

 

As coaches, we mustn’t confuse clouds with clocks.  Clouds are complex. Complexity implies an intractable relationship between the parts and the whole.  Each component is ignorant of the behavior of the system.

 

At the current moment, there are: 

 

  • 59 metrics available for coaches to assess the vertical jump in the Hawkin Dynamics force plate system

  • 13 metrics available to assess internal training load in the FirstBeat system

  • 250 metrics available to assess external training load in the Catapult system

  • Hundreds of new technologies entering the market daily

As coaches, we could spend an entire day assessing one countermovement jump. Here are a few questions I often ponder:

 

Is it important?  Why?  If I change it, will it drive performance?  If I change it, will other aspects of performance drop?  How often should it change by chance alone?  Do I have the time to change it?  Does it affect the scoreboard?  Is it interesting, or important?  What are the limitations of the technology?  What are the limitations of the humans using the technology?  Do I understand the game well enough to provide proper context?  Do the numbers match my gut feelings and heuristics?  Is the technology valid?  As my friend Franco Impellizzeri says, validity is broad in context but narrow in application: “The measure can be valid for something, but not valid for something else.”  An accurate measure is not always a good measure.

 

If only the human body and the scoreboard behaved like a clock.  We could take it apart, identify and improve the lacking hardware, and bingo, the performance mystery would be solved.  “If only metric ‘X’ could be improved, he’d play professional hockey.”  “I’ve ID’d the causative performance decrement, and targeting metric ‘X’ will fix it.”  Unfortunately, we deal with complex beings in complex environments.  This is the world of sport. 

 

“Complex problems change when you look at them, when you talk with them, and when you engage with them.” -John Rendon

 

Technology is great.  However, we must make sure to understand the assumptions pertaining to its use.  A purposeful first step in this process is understanding more about complex systems.  I have recommended three books below to improve these critical faculties. 

 

Clouds are not clocks, and clocks are not clouds.  “Complex problems cannot be solved because any attempt to create a solution changes the nature of the problem.”  I use many of the technologies mentioned in this article, but I rely on my first-principles knowledge (programming, physics, physiology, psychology, biomechanics) and a solid understanding of the environment in which I operate.  I also rely on the “subjective” understanding of the metric hamburger.  This enables me to appraise how, when, and if I may use existing or new technologies and/or metrics in the future.  As coaches, we are using Newtonian tools in a complex world.  The key is understanding this fact.

Drift Into Failure

Cynefin

Risk Savvy

Anthony Donskov

Nikita Kucherov and the Wisdom of Buddy Morris

Common practice doesn’t always equate to best practice.  In fact, as I’ve aged, I think best practice is many times contrarian in thought.  I wrote a book about it called The Gain, Go, Grow Manual:  Programming for High Performance Hockey Players.  In the book I argue that high-performance hockey players making a living in the game, with a high training age, should spend LESS time in the weight room in the off-season and MORE time on the ice.  Our hypothesis is a three-day rollover program (this is not the wisdom of Buddy…wait till the end). “Get off the ice!”  “Rest your hockey muscles.”  “Recharge your mental battery.”  “Build athleticism.”  “Get stronger.”  I can hear the echoes amongst fellow colleagues and coaches, many of whom I respect a great deal.  Fact is, I respectfully disagree. 

The Developmental Gantt chart. The more one progresses in skill level, the more priority is given to the ice at the expense of weight room touches.

 Fast forward to the current moment, as I was recently emailed an article on Nikita Kucherov’s intense summer training.  A few quotes from Kucherov stood out to me:

 “I know some guys around the league who come in, do their work out, go home, and don't get on the ice until late August. To me, that would be the same thing as LeBron James not touching a basketball for two months. It would be like Lionel Messi going months without touching a soccer ball. We play hockey. Why wouldn't we go on the ice?"

"All those little things that happen 25 to 35 times a game, I practice them. I want to be ready to pick any puck off the boards - backhand, forehand.

"I want to have my head up and be able to find open guys as fast as I can. I want to be in control. So, I work on that, and when the season starts with training camp, I don't struggle to make that play or get the puck off the wall so easily.”

Several other quotes from the author:

 

“Together, the two are working on all the little things that helped Kucherov lead the Lightning with 113 points in 2022-23.”  

 

“A lot of Kucherov's summer training is built on consistent repetition. Some days, you can find him corralling hundreds of pucks off a rim along the boards, as he carefully studies the angles and bounces of the puck. Other days, he may be working on the same exact shot, from the same exact spot, for hours. It's not glamorous. It's not always the prettiest. It's not always the most fun.” 

Ok.  He’s an outlier, you say.  He’s the exception, not the norm.  It’s survivorship bias, you infer.  Bottom line, if you want to be good at playing the guitar, you need to practice playing the guitar.  Nobody would tell Slash or Van Halen to spend the entire summer focusing on finger strength and then go play a live concert in September.  They make a living playing.  Why do we do this as strength coaches?  Here are a few additional things to consider:

Time: 

  • For every hour spent increasing Kucherov’s 1RM in the weight room, he has less time for skill acquisition on the ice.  Target context, target environment.  In addition, the forces experienced on the ice cannot be emulated in the weight room.  Run-to-glide forces in skating can reach 200% bodyweight by the third step (on a single leg), and 120% bodyweight by the sixth step.

 

Frequency Considerations:

  • Recovery:  We are assuming Kucherov has a high training age.  In order for him to adapt to a stimulus in the weight room, greater intensities must be used to trip the homeostatic wire.  Greater intensities equate to less frequency in the weight room, as more recovery is needed. 

  • Skill acquisition (recovery):  Skill acquisition on the ice can be used as a recovery day.  “Kucherov said some of those on-ice sessions when he’s working on angles with Oates can almost be more taxing from a mental standpoint than a physical one.”  No one would say, “Great guitar lesson, I threw up.”  On the contrary: “Great guitar lesson, I improved my dexterity to reach the X chord.”  Read the article!  Monotonous practice of “the little things,” routes, game situations, NOT mindless conditioning. 

 

Yes, I believe in the value of structured strength and conditioning programs in the off-season.  However, as both training age and competition level increase in harmony, something has to give.  As my friend Buddy Morris says: 

 

“Training a beginning athlete and an elite athlete are basically the same. It’s very general in nature. The beginning athlete has no sporting form and can’t handle anything very intense. Everything must be general movement/general in nature. The elite athlete, the only way they are going to get better in their primary sporting activity is to continue to do the sporting activity. So now that becomes the greater stressor to the CNS, the bio-motor system, the neuroendocrine system, the neurochemistry. Now training goes back to being very general in nature because of the demand of the sporting activity.” 

 

Common practice doesn’t always equate to best practice. Thanks, Buddy, for summing it up so eloquently.  Your quote, as well as the wisdom of Dan Pfaff, have changed my mindset regarding priorities, timelines, and developmental windows.   

 

 

Anthony Donskov

The Case Against Adductor Squeeze Tests

They say that as your island of knowledge grows, so too does the shoreline of your ignorance.  It seems the older I get, the more questions I have, and the fewer answers I’m able to give with great certainty.  Twenty years ago, I entered the profession a young whippersnapper who had all the “right” answers; today I’m less certain than ever.  A mentor of mine is fond of saying there is no shortage of information in this day and age.  In fact, we’re swimming in a sea of it.  However, most is noise.  Your life preservers are great mentors and a solid foundation of first-principles knowledge.  What is first-principles knowledge?  It is a solid understanding of physics, biomechanics, physiology, programming, and psychology.  These are the bedrocks of being a neo-generalist and a serial specialist.  In short, this understanding enables us to be better critical thinkers and coaches. 

 

I recently had the opportunity to interview Dr. Mal McHugh on the High-Performance Hockey Podcast (to be released next Monday).  He authored a recent journal article with colleagues titled “Adductor Strains in Athletes” in the International Journal of Sports Physical Therapy.  Mal and colleagues had published earlier articles on adductor strains in high-performance ice hockey: in 2001, “The association of hip strength and flexibility with the incidence of adductor muscle strains in professional ice hockey players,” and in 2002, “The effectiveness of a preseason exercise program to prevent adductor muscle strains in professional ice hockey players.”  Mal has spent decades researching, and as a pragmatic practitioner has tested thousands of athletes.  A major aha moment occurred to me after reading the text.  It actually reached out and slapped me in the face.  I chuckled to myself, as I had failed to use my first-principles knowledge in filtering out information.  What information?  Testing the adductor complex using common squeeze tests.  According to Mal, the three key requirements for testing adduction strength are:

 

1. A comparison can be made between involved and non-involved sides

2. A comparison can be made between agonists (adductors) and antagonists (abductors)

3. The unit of measure enables comparison across populations

 

Here’s the slap!  “Squeeze tests where both limbs contract maximally cannot be used to assess asymmetry in adduction strength between limbs.  The laws of physics and neurophysiology invalidate such tests.”  Laws?  Physiology?  What laws?  What physiology?  I had ignored my first-principle knowledge.  Mal had educated me to the obvious.

Newton’s third law:  for every action, there is an equal and opposite reaction.  “If one squeezes a dynamometer between the knees in the bent-knee adduction squeeze test, or between the feet in the straight-leg squeeze test, Newton’s third law dictates that the force on the right side will equal the force on the left side … It follows that squeeze tests have not been shown to be effective at identifying strength deficits between limbs but have been effective at identifying athletes with groin and hip pathology versus healthy athletes.”  If the goal is identifying asymmetries, such as adduction–abduction ratios, squeeze tests are not the solution. 

 

Neurophysiology:  the neurophysiological limitation in comparing strength between limbs while performing simultaneous maximal efforts with both limbs is called the bilateral deficit.  “The bilateral deficit phenomenon is characterized by a lower force generated when two limbs perform a maximal effort bilaterally compared with the sum of the forces generated by the two limbs when performing the effort unilaterally.  While the bilateral deficit has not been studied specifically for clinical assessment of weakness, no studies have validated bilateral testing for identifying unilateral weakness.”

 

Just like that, Mal had opened my eyes to blind spots I never knew existed.  The first-principle knowledge I had; the critical thinking skills I failed to use.  Mal goes on in the 2023 article to suggest superior ways of measuring adductor strength and asymmetry, but the lessons he taught me moved well beyond physics, physiology, and pragmatic application.  Think!  Critically think!  Understand your first principles.  Use them as your filter.  Come back to them often!  Thanks, Mal, for the valuable lesson. 

Anthony Donskov

Regression Towards the Mean

“Wow, athlete A had a poor year.  It must have been a sophomore slump. He needs to approach things differently in the offseason.” 


“Wow, athlete A had an amazing year.  He must have had a new performance coach. He needs to keep doing what he’s doing.” 

 

“The aggregate of injuries on team X this year is unacceptable.  Over 500 man-games lost (MGL).  We need to do a deep dive and hire a consulting company.”

 

“We fixed it.  Our injuries are down to fewer than 280 man-games lost.  These guys know what they’re talking about.” 

 

We commonly hear these narratives from pundits in high performance.  The press, coaches, and front-office staff are constantly seeking answers to complex questions.  After all, their job is to win, and you can’t win without production from top players and healthy teams.  It would be great to pinpoint the exact cause of each of these outcomes, but sport and injury are extremely complex, unpredictable, and random.

 

“History is not what happened, but what survives the shipwrecks of judgment and chance.”  -Maria Popova

 

If we alter the quote above and insert the word “injury,” “production,” or “scoreboard outcome” in place of “history,” we can better understand their complexities.  Luck, chance, skill, timing, and more all affect the outcome.  Thousands of confounders are at play. 

 

Perhaps a better way to look at these snapshots (i.e., injury and production) is through what is known in statistics as regression towards the mean. 

 

“Regression toward the mean simply says that, following an extreme random event, the next random event is likely to be less extreme. In no sense does the future event "compensate for" or "even out" the previous event.” -Wikipedia
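A quick simulation makes the definition concrete.  This is a minimal sketch with made-up numbers (a league average of ~300 man-games lost with a standard deviation of 80 is an assumption for illustration, not real data): when a team’s first season lands in the extreme tail, its next, independent season tends to fall back toward the average.

```python
import random

# Two independent "seasons" drawn from the same distribution: when the
# first draw is extreme, the second tends to sit closer to the average.
random.seed(42)
mean, sd = 300, 80  # hypothetical league-average MGL and spread
pairs = [(random.gauss(mean, sd), random.gauss(mean, sd))
         for _ in range(100_000)]

# Teams whose first season was extreme (worst 5% of outcomes)
cutoff = sorted(p[0] for p in pairs)[int(0.95 * len(pairs))]
extreme = [p for p in pairs if p[0] >= cutoff]

first_avg = sum(p[0] for p in extreme) / len(extreme)
second_avg = sum(p[1] for p in extreme) / len(extreme)
print(f"first-season average of extreme group: {first_avg:.0f}")
print(f"their next-season average:             {second_avg:.0f}")
# The second-season average falls back toward the overall mean of ~300.
# Nothing "compensates" for the bad year -- the draws are independent.
```

The selected group’s second season regresses toward 300 not because anything corrected itself, but because an extreme draw is simply unlikely to repeat.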

 

Using Regression Towards the Mean for Injury Probability

Last year Team X was hit hard by injuries: a total of 550 MGL.  Let’s take a look at 16 years of Team X’s longitudinal data.  Over that span the team accrued 4,784 MGL, so my best prediction using regression towards the mean is approximately 299 MGL next season (4784/16).  Obviously, this may not be the case, but when viewing the aggregate, I believe that number best approximates my hypothesis.  Yes, injuries are complex and multifactorial, but these numbers can be used to compare league-wide team totals when evaluating what’s “fair,” “good,” or “unacceptable” and making difficult decisions.  One season may be noise, a drastic outlier, chance, or simply a fluke.
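The forecast itself is just the long-run average.  A minimal sketch using the figures from the text (4,784 MGL over 16 seasons, against last year’s 550):

```python
# Baseline injury expectation from long-run team data
total_mgl, seasons = 4784, 16
long_run_mean = total_mgl / seasons
print(f"Regression-towards-the-mean forecast: ~{long_run_mean:.0f} MGL")  # ~299

last_season = 550
print(f"Last season sat {last_season - long_run_mean:.0f} MGL above the long-run mean")
```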

Using Regression Towards the Mean for Production

An elite centerman had a breakout season in 2017-2018, scoring 43 goals in the NHL regular season.  Is this sustainable?  50 next year?  The year after?  He has played a total of 8.25 seasons and scored 140 career goals.  Using regression towards the mean, my best guess is that his goal production for the upcoming hockey season would be roughly 17 goals (140/8.25). 
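The same one-line calculation, using the career figures from the text (140 goals over 8.25 seasons):

```python
# Baseline production expectation: career goals per season played
career_goals, seasons_played = 140, 8.25
per_season = career_goals / seasons_played
print(f"Baseline forecast: ~{per_season:.0f} goals")  # ~17, not another 43
```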

Sports, scoring production, and injury are largely independent, random events.  Extreme numbers may simply be fluctuations in the sea of noise while we set voyage on the ships of chance and judgment.  Use regression towards the mean when dealing with random, complex issues in an attempt to avoid the gambler’s fallacy.

 

“The Gambler’s Fallacy occurs when an individual erroneously believes that a certain random event is less likely or more likely to happen based on the outcome of a previous event or series of events.” 

 

 
