Model-Informed Coaching: Investigating the Drop and Pop Deadlift

By: CJ Gotcher, Barbell Academy Director

Mentally, It’s Easy to Take It Easy

Sustained and meaningful progress doesn’t come easily; we know that from getting under the bar and doing the work.

But when it comes to growing as coaches, we forget this rule, settling for comfortable answers that are obvious and simple but not necessarily right or useful. It’s the mental version of pounding out endless sets of bicep curls and skipping leg day, but we get away with it because it’s not obvious when we’re skipping out on brain training.

Even worse, we can put in a lot of hard effort and completely dodge the work that really matters.

To get a picture of what this looks like, imagine three types of coaches:

Peer-Review or Bust: These coaches spend hours poring through textbooks and academic literature. They subscribe to multiple research reviews and stay on top of what’s trending in exercise science. They were skeptical of BCAAs before skepticism went mainstream.

They focus on abstracts, reviews, and expert opinion, not the methods sections. After all, these are peer-reviewed studies conducted by experts: who are they to be armchair scientists?

Where the data is lacking, they flood the conversation with citations and details, either not realizing the evidence isn’t there or hoping you won’t notice. And when a study’s findings don’t make sense or they don’t match results on the platform, this coach sticks with what’s on the written page. After all, they’ve done the work. They’re evidence-based.

The Philosophers: These coaches spend hours thinking about the models around training. They read widely and apply a range of ideas—from evolution to economics to the load/deformation curve of steel—to the coaching problem.

They focus on underlying principles and historical case studies that match the model. After all, [insert model here] is universally proven.

When their model fails to explain an event, either in a peer-reviewed paper or from a practicing coach, they reject it as an exception or a failure to follow the model. The model is the ground truth, so data to the contrary must be noise or human error. After all, they’ve done the work. They’re logical.

Deep in the Trenches: These coaches don’t have the time to stay up on the literature or think abstract thoughts about training—they’re too busy getting results. They keep a busy client load and constantly adjust what they do based on their intuition.

They focus on the training logs of the athletes they’ve worked with and the feedback of their clients right now. After all, if it ain’t broke, don’t fix it, right?

When presented with a new method that may work better or evidence that their methods are actually ineffective, they reject it immediately. After all, they’ve done the work. They’re in the trenches.

Model-Informed Coaching: Hard, Effective

Pillars of Model-Informed Coaching

Each of the coaches above relies too heavily on only one of the three pillars of coaching knowledge—science, logic, and personal experience. They each have good points, can defend their choices, and may be working very hard to serve their clients, but in sticking to the domain they’re comfortable with, they miss important insights and dodge the real hard work of challenging their assumptions.

In reality, few coaches are as extreme as these stereotypes. Still, you may see a little of yourself or coaches you know in them. We tend to gravitate strongly toward one or two of these sources at different times in our coaching journey, often covering for our weaknesses by ignoring them or rationalizing them away as less important.

Each of these pillars brings its own insights and limitations, and we would benefit from using them all. The challenge is bringing the concept of model-informed coaching to reality. How do we draw from all three sources of information without being stuck in indecision?

There are many ways to do this, and no one has figured it out perfectly, but the core ideas came together for me in one lucky case study this year.

How It Started

The first principle of model-informed coaching is to check yourself, and you can do this at whichever point you start:

  • Assess your observations and the studies you read critically.
  • Test new ideas in your practice, whether they come from your brain or a published paper.
  • Read the published literature to point out lapses in logic and bias in what you see in the gym.

For one example, I first learned how to deadlift without a clear model. I looked up the Workout of the Day, watched the demo video, and tried to do that. It was atrocious, but I made it out in one piece.

Eventually, I learned. I got coaching, watched videos, read books, and developed a mental model of the lift that made sense to me:

The bar had to be precisely over midfoot and travel in a straight line to eliminate any unproductive work. The back had to be rigid, and the bar had to touch the shins, so this meant there was only one “right” position off the floor. My job as a lifter was to get in that optimal position and hold it throughout the lift.
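
Sketched as physics (my own gloss on the model, not a derivation from any source), the argument was that only vertical displacement does useful work against gravity, so any horizontal drift just lengthens the bar path:

```latex
% Useful work depends only on bar mass m, gravity g, and lockout height h:
W_{\text{useful}} = m g h
% If the bar also drifts a net horizontal distance d, even the shortest
% possible path (the chord) is longer than the straight vertical line:
s \;\ge\; \sqrt{h^{2} + d^{2}} \;>\; h \quad \text{for } d > 0
```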

Because my lifters and I were getting stronger, I saw no reason to challenge the model.

I noticed some incredibly strong lifters who set up differently, starting with their hips high, often with the bar slightly forward of the midfoot, and dropping their hips to meet the bar and lift it in one smooth motion. I thought of them as exceptions. Their approach couldn’t be better, I reasoned, because the extra movement had no place in my model, but as long as they reached the same position when the bar left the floor, it didn’t matter. Besides, I told myself, it would be harder to teach novices.

Or so I thought until I saw a video of Swede Burns teaching this setup with good results to a relatively new lifter who had serious form issues.

I decided to play with it myself. It took me two weeks (four sessions) to feel comfortable and another month or two to really dial it in, but it felt like I moved the bar faster at lighter weights, the heavier pulls felt smoother, and I hit a meet PR a few months later using this “drop-and-pop” (DnP).

At this point, I was wondering whether this was a real phenomenon or just in my head. I ordered the REPONE Strength linear transducer and measured a big difference in bar speed.

Model-Informed Velocity Chart

Caption: over 20 sessions of deadlift data at various bar weights, trying to move the bar as quickly as possible, analyzed by Greg Bultman. The Y-axis shows the difference in speed between the two setups in meters per second, and the X-axis shows bar weight. At the lowest bar weights, the DnP deadlifts were, on average, 0.21 m/s faster. At the highest bar weights, the average difference narrowed to 0.05 m/s, and because of fewer samples, the standard deviation widened.
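
For a sense of the kind of comparison behind that chart, here is a minimal sketch (not the actual analysis script; the file layout and column names are hypothetical) that groups peak bar-speed readings by setup and load and reports the DnP-minus-conventional difference at each weight:

```python
# Minimal sketch: group peak bar-speed readings by setup and load,
# then report the mean DnP-minus-conventional difference per load.
# The CSV layout and column names here are assumptions for illustration.
import csv
from collections import defaultdict
from statistics import mean

def speed_diff_by_load(path):
    """Return {load_kg: mean DnP speed minus mean conventional speed, m/s}."""
    speeds = defaultdict(lambda: defaultdict(list))  # load -> setup -> [m/s]
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            speeds[float(row["load_kg"])][row["setup"]].append(
                float(row["peak_speed_ms"]))
    return {
        load: mean(by_setup["dnp"]) - mean(by_setup["conventional"])
        for load, by_setup in sorted(speeds.items())
        if by_setup["dnp"] and by_setup["conventional"]
    }

if __name__ == "__main__":
    for load, diff in speed_diff_by_load("deadlift_sessions.csv").items():
        print(f"{load:5.0f} kg: DnP {diff:+.2f} m/s vs. conventional")
```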

I looked into the literature on elastic stretch and the winding filament theory, and it seemed plausible that a pre-stretch could store elastic energy and contribute to a stronger deadlift.
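
As a toy illustration of that mechanism (my simplification, not a formula from the winding filament literature), treating the stretched tissue as a spring gives a feel for where the extra energy would come from:

```latex
% Toy spring model: tissue with stiffness k, pre-stretched by \Delta\ell,
% stores elastic energy that could be returned at the start of the pull:
E_{\text{stored}} = \tfrac{1}{2}\, k\, (\Delta\ell)^{2}
```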

I asked a few coaches to experiment with it and assigned it to a few of my clients. All involved liked it and said it felt faster, though not everyone chose to keep using it, and they provided useful feedback about their experience.

The Challenge

I appeared to have a method that produced a faster bar speed and stronger maximal efforts. I applied the idea of model-informed coaching by challenging my assumptions at each stage, checking the biases of one source against another:

I had—

  • a plausible mechanism.
  • a sampling of high-level lifters doing it this way.
  • scientific data that matched the story.
  • 10+ people’s subjective feeling of improvement.
  • a teaching progression that worked to help remote lifters execute the technique.

Most IG workouts and training e-books are based on less.

But this is where the second principle of model-informed coaching comes into play: The process of checking is never over.

My sample was limited. All my subjects were experienced lifters with solid technique. But what if the speed difference was unique to experienced lifters? How much of a part did placebo play? I was the only one using a quantifiable measure (bar speed)—what if the drop-and-pop only feels faster for most people?

The only way to know—and in the process, learn more and become a better coach—would be to keep checking, this time under more objective conditions.

In this case, I hosted a deadlift-only camp teaching both setup techniques along with other drills, and at the end, each attendee did a double at approximately 50% and 80% of their one-rep max (1RM) weight. The full write-up of the methods and results is included here.

This pilot test did not support the idea that a dynamic setup creates a stronger pull or faster bar speed. Although four of the participants felt they were much stronger in the dynamic start, only two saw a meaningful difference, and the average difference was effectively nil.
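
To make “effectively nil” concrete, here is a toy paired comparison (made-up numbers, not the camp’s data, and a simpler test than the blinded analysis) showing how per-lifter speed differences can be checked against zero:

```python
# Toy paired comparison: each lifter pulls both setups at the same load,
# and we test whether the mean per-lifter difference (DnP minus
# conventional, in m/s) is distinguishable from zero.
# These numbers are made up for illustration.
from math import sqrt
from statistics import mean, stdev

conventional = [0.82, 0.75, 0.90, 0.68, 0.71, 0.84]  # peak m/s, per lifter
dnp          = [0.85, 0.74, 0.89, 0.70, 0.69, 0.86]

diffs = [d - c for d, c in zip(dnp, conventional)]
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / sqrt(n))  # paired t-statistic, df = n - 1

print(f"mean difference: {mean(diffs):+.3f} m/s")
print(f"paired t = {t:.2f} with {n - 1} degrees of freedom")
# With |t| well below ~2.57 (the two-tailed 5% cutoff for df = 5),
# there is no evidence here that the setups differ.
```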

There are some possible reasons for these findings, which I outline in the discussion section of the report. So when a model-informed coach runs into evidence that counters their assumptions, do they have to “give up”? Not necessarily.

Next Steps

The fact that the checking never ends is, in some ways, a blessing because it means you can always find new ways to test the hypothesis, but it’s also a challenge. Because the search is potentially endless, you have to decide when you’ve checked an idea well enough to make the right decisions.

You can do that in a few ways:

  • Make decisions at the appropriate level: My personal experience is that the new setup is far better, and it certainly doesn’t hurt, so I’m going to keep using it. There’s enough evidence that I might teach it to experienced lifters as something to experiment with. But I’m not yet going to teach all my lifters this setup or announce it as “the right way to deadlift.” The wider the application, the more confident I’d need to be.
  • Consider whether the checking itself brings you value: Over the last 10 months of collecting data and experimenting, I didn’t stop learning. Along the way, I challenged and updated my model of the deadlift, got great feedback on my coaching, hosted a camp, created content (social media posts, teaching progression videos, this article), scored a PR, and overall enjoyed myself. Even if it turns out there’s nothing here, I can keep searching until I’m convinced or until I stop learning from the process.
  • Decide how deep is “enough”: If what you’re testing is core to your belief system or your training methods, keep checking until you’re certain of your knowledge. You may still be wrong—very smart people are often very wrong despite their best efforts—but you can’t be confident as a coach until you’ve done the work. On the other hand, if it’s a minor issue, good enough may be just right.

For me, I plan to keep checking and learning, refining the teaching progression, and continuing to collect feedback. (Keep an eye out for opportunities to join in on the next review.)

Stay Curious. Stay Humble.

The third and final principle of model-informed coaching is to stay curious and humble.

None of us are perfect. Most of the time, I gravitate toward the logic/science end of the spectrum and have to build in constant reminders to check my ideas with what’s happening in real time.

Model-informed coaching requires some level of humility by design. Because the checking goes on forever, you always have to be a little humble, even in things you hold certain and sacred. After all, the next check might discover something you missed.

Staying model-informed isn’t an achievement or a checklist. There’s no certification or degree that confirms you’ve “got it,” and it applies well beyond the gym. It’s an ongoing practice for coming to the highest-quality ideas without fooling yourself and others.

It’s hard. It’s a lot of work. But like quality training, it’s worth it.


Many thanks to Greg Bultman, who conducted the blinded analysis of the data from the trial, and to Dr. Jonathan Sullivan, for his review of the pilot proposal and improvements to the design.
