Gradient factors - deep stops thread in DIR forum


I'm not as comfortable as you with the premise that staying within the deco schedule (whether straight ZHL or as modified with GF) will always perfectly compensate for ongassing during deep stops (as long as you are willing to extend deco, as will be required). In theory, of course, that is a true statement -- I understand that -- but only if one assumes the model is perfect.

So then don't argue gradient factors at all: they are part of the imperfect model that you are not comfortable with, so why bother?

Conversely, if you do argue within the framework of the imperfect model because it's the best one we have, then gradient factors will never take you to where DCS is expected. (Unlike the "deep stop" profiles that do.)

Trying to have it both ways is what adds to the confusion IMO.
 
It seems reasonable to define "efficiency" as comparing various ascent strategies that all use the EXACT same ascent time.
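To make the equal-time comparison concrete, here is a hedged Python sketch, not a validated deco model: the half-times, the 0.79 inert-gas fraction, the starting tension, and both profiles are illustrative assumptions. It runs two 20-minute ascents through a fast and a slow exponential compartment and reports final tissue tension and time-integrated supersaturation as crude proxies for decompression stress:

```python
import math

def simulate(profile, halftime, p0):
    """profile: list of (ambient_bar, minutes) stops. Returns the final
    tissue inert-gas tension and the time-integrated supersaturation
    above ambient pressure (bar*min), stepping minute by minute."""
    k = math.log(2) / halftime
    p, integral = p0, 0.0
    for p_amb, minutes in profile:
        p_insp = 0.79 * p_amb  # inert fraction of air, illustrative
        for _ in range(int(minutes)):
            p = p_insp + (p - p_insp) * math.exp(-k)  # exponential update
            integral += max(p - p_amb, 0.0)
    return p, integral

# Two 20-minute ascents after a dive that loaded tissues to 2.5 bar:
deep_stops   = [(2.0, 5), (1.6, 5), (1.3, 5), (1.15, 5)]
shallow_bias = [(1.6, 3), (1.3, 5), (1.15, 12)]

for name, prof in [("deep-stop bias", deep_stops), ("shallow bias", shallow_bias)]:
    for halftime in (5.0, 80.0):  # fast and slow compartments
        p, integ = simulate(prof, halftime, 2.5)
        print(name, halftime, round(p, 3), round(integ, 2))
```

Even in this toy, the slow compartment surfaces with a higher tension after the deep-stop-biased ascent, which is the usual argument against spending the fixed time deep.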

Along those same lines, I have seen anecdotal evidence of some divers who use an accelerated ascent rate (maybe 60 ft/min or more) for the initial portion of the ascent, maybe for the first 30 or 50 feet (depending on depth, of course). I wonder how this practice could be examined?

I wonder what would happen if, instead of ascent time, one were to set an acceptable chance of a DCS hit, let's say 0.1% per dive, as the constant. What would the shortest deco profile that satisfies that requirement then be? Or the profile that consumes the least amount of gas? Could the former result in a preference for deeper stops? I doubt it, but have no proof.
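As a toy illustration of that inversion, fixing the risk and solving for the time, one can sketch the search as follows. The compartment half-time, the gas, and especially the toy risk function are invented placeholders; answering the question for real requires a calibrated probabilistic model, not this:

```python
import math

HALF_TIME_MIN = 20.0  # assumed single-compartment half-time
K = math.log(2) / HALF_TIME_MIN

def tissue_after(p_tissue, p_insp, minutes):
    """Exponential (Haldane-style) tissue update at constant inspired pressure."""
    return p_insp + (p_tissue - p_insp) * math.exp(-K * minutes)

def toy_risk(supersat_bar):
    """MADE-UP monotone risk model: p(DCS) grows with surfacing
    supersaturation. Stands in for a real calibrated dose-risk curve."""
    return 1 - math.exp(-max(supersat_bar, 0.0) ** 2)

def shortest_stop(p_tissue, p_insp_at_stop, target_risk):
    """Shortest stop (whole minutes) so that surfacing straight to
    1 bar immediately afterwards meets the risk target."""
    for minutes in range(0, 360):
        p = tissue_after(p_tissue, p_insp_at_stop, minutes)
        if toy_risk(p - 1.0) <= target_risk:
            return minutes
    return None  # not achievable at this stop on this gas

# Example: tissue loaded to 2.0 bar, single stop at 1.3 bar on air
print(shortest_stop(2.0, 0.79 * 1.3, 0.01))
```

The structure is the point: the schedule length is the output and the risk is the input, which is the reverse of how schedules are normally validated.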
 
So then don't argue gradient factors at all: they are part of the imperfect model that you are not comfortable with, so why bother?

Conversely, if you do argue within the framework of the imperfect model because it's the best one we have, then gradient factors will never take you to where DCS is expected. (Unlike the "deep stop" profiles that do.)

Trying to have it both ways is what adds to the confusion IMO.

I cannot help it if a certain amount of complexity (not confusion) is required when discussing this subject. You're attempting to impose a black and white (deserved/undeserved, expected/unexpected) paradigm on something more complex. It is sophistry to suggest that one cannot both embrace ZHL/GF while recognizing it is not perfect - and therefore, within the bounds of the model, be thoughtful about what GF-Low we use.

Your argument would suggest the only figure that ever matters to outcomes is GF-high: GF-low is irrelevant except that it defines how much time is wasted, or not, on deco. You are suggesting that (so long as there were no ceiling violations along the way) every dive that ends with the same surfacing gradient presents an equal risk of DCS no matter the distribution or timing of the deeper stops. I think that is a leap that is unwarranted. It is also contrary to the growing consensus that we should be diving GF lows of 40, 50, or even a bit more, as opposed to 20 or 30.
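For readers following along, the distinct roles of GF-low and GF-high can be made explicit in a few lines of Python. This is a minimal sketch of the standard gradient-factor interpolation between the first stop and the surface; the Bühlmann a/b coefficients below are illustrative values for one mid-speed compartment, not authoritative:

```python
def buhlmann_m_value(p_amb, a_val, b_val):
    """Maximum tolerated tissue tension at ambient pressure p_amb (bar)."""
    return p_amb / b_val + a_val

def gf_allowed_tension(p_amb, a_val, b_val, gf):
    """Tolerated tension when only a fraction gf of the gradient between
    ambient pressure and the M-value is permitted (0 < gf <= 1)."""
    return p_amb + gf * (buhlmann_m_value(p_amb, a_val, b_val) - p_amb)

def gf_at_depth(p_amb, p_first_stop, p_surface, gf_low, gf_high):
    """Linear interpolation of the gradient factor: gf_low applies at the
    first stop, gf_high at the surface."""
    if p_first_stop <= p_surface:
        return gf_high
    frac = (p_first_stop - p_amb) / (p_first_stop - p_surface)
    return gf_low + frac * (gf_high - gf_low)

# Illustrative compartment coefficients; first stop at 3.1 bar, GF 40/85
a_c, b_c = 0.6667, 0.8125
for p_amb in (3.1, 2.5, 1.9, 1.3, 1.0):
    gf = gf_at_depth(p_amb, 3.1, 1.0, 0.40, 0.85)
    print(p_amb, round(gf, 3), round(gf_allowed_tension(p_amb, a_c, b_c, gf), 3))
```

In this picture GF-low sets where the first stop occurs and how much supersaturation is tolerated deep, while GF-high sets the surfacing gradient; the debate above is precisely about whether the deep part of that line affects outcomes independently of the surfacing value.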
 
Not totally related, but I recently watched The Game Changers on Netflix, which, although interesting in its own right, clearly demonstrated the effect of animal products, including dairy, on the fluidity of the bloodstream.

It seems entirely possible that if a plant-based diet can significantly improve athletic ability through its effect on the bloodstream, there may be something to read across to decompression stress in diving.
 
Thanks @Dr-simon-mitchell for pointing out the difference. There are simply two competing strategies: The practical diver would like to fix a certain probability of a DCS hit that is considered acceptable (maybe 1 in a few thousand dives) and find the shortest deco schedule (maybe with other boundary conditions, like available gases) that realises that risk. On the other hand, in empirical testing, you measure the risk for a given schedule, or maybe a few (as this is probabilistic, it already requires many dives under controlled conditions). The two things are related, but not necessarily in a simple manner; they treat different variables as dependent. And this is what causes the confusion.

Let me add, regarding "The original development of these models contained no testing of actual bubble formation in biological systems - it was all just physical theory": from my perspective as a theoretical physicist, one might have doubts that the physics in these models is really done in the best possible way. I have written about this elsewhere; see Search Results for "VPM" – The Theoretical Diver.
 
This is all too common a story.
How does one avoid this delay and get into a chamber sooner?
Better medical training, and continuing to insist that the medical people speak to DAN?
 
There are simply two competing strategies: The practical diver would like to fix a certain probability of a DCS hit that is considered acceptable (maybe 1 in a few thousand dives) and find the shortest deco schedule (maybe with other boundary conditions, like available gases) that realises that risk.

How far are we from being able to have such a planner (one which is at least somewhat backed up by science)?
Your blog article Fraedrich follow-up – The Theoretical Diver
already has a very interesting plot of GF pairs against relative DCS risk. That plot is the best answer to your practical diving strategy that I have seen so far.
 
