CrossFit | 190217
Sunday

190217

Workout of the Day

Rest Day

Post thoughts to comments.

The Race by Thomas Hart Benton

"The National Strength and Conditioning Association can’t shut down a suit by its insurer seeking to dodge coverage for an underlying false advertising suit brought by CrossFit Inc. just yet, a California federal court ruled Monday. … The NSCA is currently embroiled in a bitter legal battle with CrossFit over claims that it published a since-debunked 2013 study portraying CrossFit’s exercise regimen as unsafe, despite knowing the study’s findings were bogus."

Read the article: CrossFit Rival Can't End Insurer's Escape Bid in False Ad Suit

“If the statement succeeds in its purpose, we will know it because journals will stop using statistical significance to determine whether to accept an article. Instead, journals will be accepting papers based on clear and detailed description of the study design, execution, and analysis, having conclusions that are based on valid statistical interpretations and scientific arguments, and reported transparently and thoroughly enough to be rigorously scrutinized by others.” —Ron Wasserstein, Executive Director, American Statistical Association

Read More: We’re Using a Common Statistical Test All Wrong. Statisticians Want to Fix That.

Comments on 190217

16 Comments

Matthieu Dubreucq
November 25th, 2019 at 9:21 pm
Commented on: Scientific Method: Statistical Errors

I like that the author brings up some possibilities to replace the p-value gold standard. It is one thing to know what doesn't work and another to find a better way.

Matthieu Dubreucq
November 25th, 2019 at 9:07 pm
Commented on: We’re Using a Common Statistical Test All Wrong. Statisticians Want to Fix That.

This is great information to know. Especially when reading a study, we can now treat the p-value as one correlate of a study's value rather than a direct gold standard.

Matthieu Dubreucq
November 25th, 2019 at 9:00 pm
Commented on: CrossFit Rival Can't End Insurer's Escape Bid in False Ad Suit

Good job fighting until the end and bringing out the truth in this case.

Sam Pat
March 2nd, 2019 at 11:49 pm
Commented on: 190217

Assault bike - 45 minutes

Lisa Stanley
February 17th, 2019 at 10:05 pm
Commented on: 190217

4 mile ruck with 28# - 1 hour

Js Smith
February 17th, 2019 at 8:58 pm
Commented on: 190217

Made up 190215, results there.

Samuel Stefanelli
February 17th, 2019 at 6:05 pm
Commented on: 190217

For Time

100 Wallball

50 TTB

Run 50m when breaking from round of WB & TTB

My rest days are Wednesday💪🏻

Katina Thornton
February 17th, 2019 at 12:53 pm
Commented on: Scientific Method: Statistical Errors

So the p-value gained its elevated status in the midst of a statisticians' feud. Ego trumps science once again. I am at once reminded of Ancel Keys, but there are others too numerous to count.

Mary Dan Eades
February 19th, 2019 at 12:56 am

Yep, always a battle of the egos. As was so rightly pointed out by Max Planck, "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." Or more succinctly, if somewhat sadly: "Science advances one funeral at a time."

Katina Thornton
February 17th, 2019 at 12:31 pm
Commented on: We’re Using a Common Statistical Test All Wrong. Statisticians Want to Fix That.

Ron Wasserstein's statement of the ASA's second principle, "P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone," illustrates the misuse of the p-value in simple terms that even I can understand. This misuse of the p-value is prevalent in medical research. It's time to stop committing a Vizzini blunder!
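The ASA's principle can be seen in a small simulation (all numbers here are illustrative, not from any cited study): even when the null hypothesis is exactly true, a two-sided z-test still produces p < 0.05 about 5% of the time, so a small p-value by itself cannot be read as the probability that the hypothesis is true.

```python
# Illustrative sketch: p-values under a TRUE null hypothesis.
# About 5% of experiments still "reach significance" by chance alone.
import math
import random

random.seed(42)
n, trials = 30, 2000
hits = 0
for _ in range(trials):
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]  # null is true: mean 0
    z = (sum(sample) / n) * math.sqrt(n)                 # z-stat, known sigma = 1
    p = math.erfc(abs(z) / math.sqrt(2))                 # two-sided p-value
    if p < 0.05:
        hits += 1

print(hits / trials)  # hovers near 0.05 by construction
```

None of those 5% "significant" results reflect a real effect, which is exactly why a p-value cannot stand in for the probability that a hypothesis is true.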

Chris Long
February 17th, 2019 at 5:06 am
Commented on: 190217

365-405-415-420-405-365-385-395-400lbs.

Steven Thunander
February 17th, 2019 at 3:51 am
Commented on: 190217

Globo notes: for anyone who works out at a globo gym, or used to, we know that the flow of the gym can sometimes disrupt our workouts. As always, if it is busy or there is a chance of the WOD being disrupted, it is OK to use the piece of equipment everyone needs first, then move on to the other elements. The best course of action, however, if your schedule allows, is to go during off-peak hours. For most gyms this is 8 a.m. to 3 p.m. and 8 p.m. to 6 a.m. during the workweek. Weekend afternoons and evenings are also usually slow. College globo gyms are usually slowest in the mornings, so go then if using one of those.

Alvin Fabre IV
February 17th, 2019 at 3:19 am
Commented on: 190217

29:26 225 bench 50 db moved really slow tonight

Nathan Jenkins
February 17th, 2019 at 3:03 am
Commented on: We’re Using a Common Statistical Test All Wrong. Statisticians Want to Fix That.

"A p-value, or statistical significance, does not measure the size of an effect or the importance of a result."


One of the beautiful things about the CrossFit approach to quantifying fitness/work capacity is that it obviates the need for the "post-modern" scientific approach, as Coach Glassman put it on a recent comment thread. If you cut your time on a benchmark WOD in half, you've doubled your work capacity. No need to complicate it any further than that with p-values, effect sizes, regression analyses, etc. Newtonian kinematics, it turns out, is more than sufficient as an analytic tool!
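The arithmetic behind the claim above is just average power, work divided by time (the workload figures below are made up for illustration):

```python
# Work capacity as average power: P = W / t.
# All numbers are hypothetical, chosen only to show the ratio.
work_joules = 30000.0   # fixed total work for a benchmark WOD (assumed)
time_before = 600.0     # first attempt: 10 minutes, in seconds
time_after = 300.0      # later attempt: time cut in half

power_before = work_joules / time_before   # 50 W
power_after = work_joules / time_after     # 100 W

assert power_after == 2 * power_before     # halving time doubles power
print(power_after / power_before)          # 2.0
```

Because the work in a benchmark WOD is fixed, the ratio of the two power outputs depends only on the ratio of the times, not on the assumed workload.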

Clarke Read
February 22nd, 2019 at 5:12 am

It's also one of the beautiful things about determining success in terms of pure, functional metrics: do you perform better, feel better, etc. Science is simpler, more compelling, and probably less likely to err when we define our outcomes this simply. We can clearly file interventions/approaches/etc. into meaningful buckets: does it work or does it not? And then either way we can move forward. We might make a mistake on occasion, but we can iterate and adapt so freely that the cost of those mistakes is minimized.


Obviously big chunks of science aren't this clear...but this may be a critique, not a fact. Researchers try to precisely quantify the impact of very specific inputs on very specific outputs. Often this REQUIRES statistics because obvious results simply don't exist.


But I'd hope the output of this statistical discussion among the scientific community is not a new way of looking at statistics, but a step back to a scientific process where (as Wasserstein puts it) "thoughtful statistical and scientific reasoning" determine relevance - and that research that only looks significant because it passed a p-value threshold is minimized, not magnified. I think this could lead to better-designed studies, to boot.
