Will http://www.ncbi.nlm.nih.gov/pubmed/25525213 change deco procedures?


Why those models? Why this profile? No real world diver would dive either of those profiles.

Hello again,

It seems you may have missed something in David's explanation. VVAL18 is in use in the US Navy, who, I believe, have real world divers. The trial was done partly with a view to adopting a bubble model (BVM3) in preference to it. They are both real world models, but especially VVAL18.

The problem, I believe, is getting past the belief that the algorithms we typically dive as technical divers are some sort of tested standard against which all other approaches should be judged; therefore, because the Navy test profiles look different they cannot possibly be relevant. Since nobody knows the incidence of DCS associated with the use of any of the algorithms we technical divers typically use for the same test dive profile (including the same exercise and thermal conditions) there is nothing to say that tech dive algorithms are any more valid than the NEDU ones, or that the length of decompression in the NEDU profiles was excessive. Indeed, it is self evident (given that DCS occurred in these profiles) that the decompression was not excessive. It is likely, based on simple decompression physiology, that had the NEDU dives been done on one of the shorter profiles you appear to favour, then the incidence of DCS would have been higher. David alludes to this in his explanation.

Now (and I believe this is an important point) in shorter profiles with shorter deep stops (which you advocate), even if the incidence of DCS was higher, the difference between the two profiles might have been smaller and harder to detect in a trial of practical size. That is why David talks about having profiles that are long enough to have substantial differences in distribution of deep and shallow stopping in the NEDU study. But the problem that makes deep stops problematic (protection of fast tissues from supersaturation early in the ascent at the expense of greater supersaturation in slower tissues later in the ascent) would still be there. It is simple physics: if the dives are the same length and you distribute stop time deeper, then this has to happen. This has been demonstrated in shorter tech dive relevant profiles by UWSojourner who performed a tissue supersaturation analysis of VPM-B+4 and GF 40:74 ascents from a typical technical rebreather dive. He links to this in his "move on to the thread" link in the post before this.
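Since the argument keeps coming back to what redistributing stop time deeper on a dive of fixed length does to fast versus slow tissues, here is a minimal illustrative sketch of that bookkeeping in Python. It is purely hypothetical: two made-up Haldanean nitrogen compartments, invented depths and stop times, instantaneous depth changes, and no M-values or bubble mechanics, so it is not VVAL18, BVM3, VPM-B or any divable schedule; it only shows how peak and time-integrated supersaturation can be tallied for two equal-length ascents that differ only in where the stop time is spent.

```python
# Toy, hypothetical two-compartment sketch: NOT a real decompression model and NOT a
# divable schedule. All numbers are invented purely to illustrate the redistribution
# effect described above (fast vs slow tissue supersaturation on equal-length ascents).

import math

FN2 = 0.79                                   # inert gas fraction in air
HALF_TIMES = {"fast (5 min)": 5.0, "slow (80 min)": 80.0}

def ambient_bar(depth_m):
    return 1.0 + depth_m / 10.0              # rough seawater depth-to-pressure conversion

def simulate(schedule, dt=0.1):
    """schedule: list of (depth_m, minutes) segments, starting surface-saturated.
    Returns per-compartment peak and time-integrated supersaturation (bar, bar*min)."""
    tissue = {name: FN2 * ambient_bar(0) for name in HALF_TIMES}
    peak = {name: 0.0 for name in HALF_TIMES}
    integral = {name: 0.0 for name in HALF_TIMES}
    for depth, minutes in schedule:
        p_insp = FN2 * ambient_bar(depth)
        for _ in range(round(minutes / dt)):
            for name, half_time in HALF_TIMES.items():
                k = math.log(2) / half_time
                tissue[name] += (p_insp - tissue[name]) * (1 - math.exp(-k * dt))
                supersat = tissue[name] - ambient_bar(depth)
                if supersat > 0:
                    peak[name] = max(peak[name], supersat)
                    integral[name] += supersat * dt
    return peak, integral

# Same bottom phase and the SAME 30 minutes of total stop time; only the depth
# distribution of the stops differs. A 30-minute surface interval is appended so
# supersaturation carried to the surface is counted too.
bottom, surface = [(40, 25)], [(0, 30)]
deep_first    = bottom + [(21, 10), (15, 8), (9, 6), (6, 6)] + surface
shallow_heavy = bottom + [(15, 4), (9, 6), (6, 20)] + surface

for label, sched in [("deep-stop ascent", deep_first), ("shallow-stop ascent", shallow_heavy)]:
    peak, integral = simulate(sched)
    for name in HALF_TIMES:
        print(f"{label:>20} {name}: peak {peak[name]:.2f} bar, "
              f"integral {integral[name]:.1f} bar·min")
```

With these invented numbers, the deep-stop ascent gives the fast compartment a lower peak while the slow compartment carries more supersaturation to the surface, which is just the trade-off described above; nothing here says anything about actual DCS risk in real schedules.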

Another thing we are getting hung up on (I think), is the issue of test dives of identical length. If you want to isolate the effect of stop depth distribution on decompression efficiency then the test dives MUST be of equal length. Anyone can design a really safe decompression.... just make it longer - pad the shallow stops especially. No one is trying to say that you can't do deep stops because they are dangerous. You can make a decompression dive with deep stops really safe by compensating for them with longer shallow stops. Whether doing the deep stops in this context offers any benefit is unknown. But the original promise of deep stop algorithms was that you could have shorter decompressions and even shorter shallow stops by "controlling bubble formation early" during the deeper stops. Put another way, the promise of deep stops was really efficient decompression. If deep stops made decompression more efficient then that would have been detected by the NEDU study. The point is, deep stops can certainly be used in a dive that has appropriate durations of shallow stops, but the benefit of doing them in this context (if any) is unknown. What they don't do is make decompression more efficient compared to a shallow stop dive of equal length.

I don't know whether this helps???

Simon M
 
But Simon you speak of efficiency as if it were absolute. It isn't.

Layman's example:
If you were trying to tune an engine for maximum performance, a key concept is the fuel/air mixture. You can adjust that to be inefficient in either direction... too lean and you burn up the motor, too rich and you waste fuel. There is a sweet spot.

The same concept can apply to safe decompression. Protect the fast tissues too much (overdoing the deep stops) and you sacrifice the slow tissues... conversely, if you disregard the fast tissues to spare additional on-gassing of the slow ones, you will already have bubble formation in the fast tissues. Neither concept is bad as long as you compensate each model individually to produce the desired outcome.

The concepts don't work well when they are assumed to be interchangeable based solely on stop distribution over the same amount of time. If you make the deco longer... the shallow stop model will win, purely because the overall time was longer than it should have been.
If the deco is shorter, the deep stop model becomes advantageous.

Instead of adding an hour of deco, try subtracting 15 minutes from tried and true, widely used profiles. Do the same bottom time and omit deco based on a percentage across the board (we can even call it an aggressiveness setting) and see which model wins out. Calling a type of model "more efficient" based on one set of parameters is not scientific.
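For what it's worth, Tom's "aggressiveness setting" could be expressed as simply as the hypothetical sketch below: take an existing schedule and trim every stop by the same percentage. The stop list is invented, and nothing here says whether such a trimmed schedule would be safe to dive; it only shows what the bookkeeping of that comparison would look like.

```python
# Hypothetical "aggressiveness setting": trim every stop of an existing schedule by the
# same fraction. The stop list is invented and this says nothing about safety.

def trim_schedule(stops, trim_fraction):
    """stops: list of (depth_m, minutes); trim_fraction: e.g. 0.15 to cut 15% everywhere."""
    return [(depth, round(minutes * (1.0 - trim_fraction), 1)) for depth, minutes in stops]

original = [(21, 3), (18, 4), (15, 6), (12, 9), (9, 14), (6, 24)]   # invented example
trimmed = trim_schedule(original, 0.15)

print("original total:", sum(t for _, t in original), "min")
print("trimmed  total:", round(sum(t for _, t in trimmed), 1), "min")
print("trimmed stops :", trimmed)
```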
 
The tiny bit of science offered by Simon (here) has been shown to be invalid, using their very own math formula, and basic decompression theory.
I like the answer posted by graham-hk: how many ways do you need to be made to look stupid?
 
But Simon you speak of efficiency as if it were absolute. It isn't.

[.....]

The concepts don't work well when they are assumed to be interchangeable based solely on stop distribution over the same amount of time. If you make the deco longer... the shallow stop model will win, purely because the overall time was longer than it should have been.
If the deco is shorter, the deep stop model becomes advantageous.
Calling a type of model "more efficient" based on one set of parameters is not scientific.


David Doolette's definition of efficiency taken from previous quotation:

"To be clear about the purpose, methods, and outcome of the study, we need to be clear what is meant by decompression efficiency. The purpose of a decompression schedule is to reduce the risk of DCS to some acceptably low level. The cost of a low risk of DCS is time spent decompressing; efficiency relates to this cost/benefit trade off. In comparing two decompression schedules, if one could achieve the same target level of DCS risk with a shorter total decompression time than the other, the shorter schedule is more efficient."

Tom, you also mentioned decompression that is "widely used, tried and tested". If you are referring to technical dive schedules, we actually do not know in scientific terms how safe these schedules truly are. Anecdotes and self-reported technical dives are not sufficient. How many tech divers don't self report DCS symptoms? What we need are funds sufficient to run actual scientific comparisons of DCS rates in the algorithms tech divers use. Got any crowd source funding ideas?
 
Hello again,

It seems you may have missed something in David's explanation. VVAL18 is in use in the US Navy, who, I believe, have real world divers. The trial was done partly with a view to adopting a bubble model (BVM3) in preference to it. They are both real world models, but especially VVAL18.

Yeah, I thought it was actually an algorithm specifically designed for rebreathers, although we're talking here about an air dive with air decompression. This algorithm calculated 80 minutes more deco than the table in v.6 of the Navy manual and about that much more deco than mainstream algorithms. Would a Navy diver seriously do nearly 3 hours of deco for that dive? I make 20-minute dives to that depth on a regular basis (about once a week) and, although we're decompressing on 50%, we're out of the water in about an hour. Even without the 50% it would be about 90 minutes, I guess, but still not a run time of over 200 minutes. As I said, no technical diver would do that. Are you trying to suggest that the Navy divers ARE?

I've been playing a bit with it in Vplanner and in order to get a 200-minute run time in Vplanner you'd have to set it to +5. I guess if this is the level of conservatism the Navy is using I can follow the logic that led to a 200-minute run time, but I still can't understand for the life of me how they got that completely bizarre distribution of time using BVM3. Simply put, to my way of thinking either the algorithm is broken or they did something odd to manipulate the results in order to get that distribution. No bubble model I've ever seen would give an ascent that was that deep, not by a long shot. What this profile does with that deep ascent is basically turn the dive into a multi-level dive with an hour of extra bottom time as compared to the other profile. The last 3 stops should have reflected that but they did not.

Therefore I think the researchers were comparing apples and oranges. They did one profile with an extreme level of conservatism that was a bucket profile, and one profile that was essentially a multi-level profile with the last stops not reflecting an hour of extra time deep.

I will admit that I wasn't aware of this model but just looking at what it did to calculate this dive, I'm thinking that it should have raised alarm bells with the researchers and they would have been well advised to include at least one other bubble model in their study. As it is they didn't and everything they did after calculating the profile was based on a broken profile. Apples and oranges.

The problem, I believe, is getting past the belief that the algorithms we typically dive as technical divers are some sort of tested standard against which all other approaches should be judged; therefore, because the Navy test profiles look different they cannot possibly be relevant. Since nobody knows the incidence of DCS associated with the use of any of the algorithms we technical divers typically use for the same test dive profile (including the same exercise and thermal conditions) there is nothing to say that tech dive algorithms are any more valid than the NEDU ones, or that the length of decompression in the NEDU profiles was excessive. Indeed, it is self evident (given that DCS occurred in these profiles) that the decompression was not excessive. It is likely, based on simple decompression physiology, that had the NEDU dives been done on one of the shorter profiles you appear to favour, then the incidence of DCS would have been higher. David alludes to this in his explanation.

I understand what you're saying, but surely there must be a bigger body of evidence to support the utility of RGBM or VPM than there is for BVM3. I understand that the Navy put their money on that profile, but clearly the study should have looked outside that box. Just as the mainstream profiles are perhaps not the standard to which all dives should be judged, I would suggest that the same is true of Thalmann and BVM3. So you're suggesting that I'm caught in a paradigm, and I'm suggesting that I'm not the only one!

Now (and I believe this is an important point) in shorter profiles with shorter deep stops (which you advocate), even if the incidence of DCS was higher, the difference between the two profiles might have been smaller and harder to detect in a trial of practical size. That is why David talks about having profiles that are long enough to have substantial differences in distribution of deep and shallow stopping in the NEDU study. But the problem that makes deep stops problematic (protection of fast tissues from supersaturation early in the ascent at the expense of greater supersaturation in slower tissues later in the ascent) would still be there. It is simple physics: if the dives are the same length and you distribute stop time deeper, then this has to happen. This has been demonstrated in shorter tech dive relevant profiles by UWSojourner who performed a tissue supersaturation analysis of VPM-B+4 and GF 40:74 ascents from a typical technical rebreather dive. He links to this in his "move on to the thread" link in the post before this.

I understood that when I read the article. The point I was trying to make (and still am) is that the profiles he generated are not indicative of what real world divers would do. If the starting point is faulty, then your science can be outstanding, but, just like in computer science... garbage in, garbage out.

Another thing we are getting hung up on (I think), is the issue of test dives of identical length. If you want to isolate the effect of stop depth distribution on decompression efficiency then the test dives MUST be of equal length.

I agree with that. However, it looks like the BVM3 profile has somehow been manipulated to artificially redistribute some of the time to such depths that the profile ceases to be a bubble-wrapped bucket profile and becomes a multi-level profile. There can be only two conclusions if you look at it like this: either the researchers wanted that profile, in which case we're comparing apples and oranges as I said above, or BVM3 calculated that profile, which is pretty much proof positive that it's broken.

I don't know whether this helps???

Simon M

It does, Simon, thank you. I know I'm sawing against the grain here and saying things that you might not like or want to hear but I started out saying that I think I must have missed something and I'm open to hearing what people are saying. Eventually I'll colour in all the white spots.

R..
 
Not sure if most of these posts shouldn't be moved to another thread since they are mostly discussing the NEDU study and not the original topic about the differences between nitrogen and helium. Nevertheless I will continue addressing the NEDU study as well.
It's a pity the discussion has been so spread out, with bits of information on different forums and threads.

It is true that the model used to generate the deep stops decompression schedule appears to be less common and to produce a somewhat different profile, although, as has been pointed out, we do not know whether the more widely used ones are better or not.

On the comparison between this model (BVM3) and the VPM-B +7, I'd like to ask if a proper comparison was made besides just plotting both and saying they look similar. Maybe a KS test? Because what looks similar may not be...
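One rough way to go beyond eyeballing the plots would be something like the sketch below: treat each schedule's stop time as a distribution over depth and run a two-sample KS test on it. Both schedules here are invented placeholders rather than the actual NEDU BVM3 or VPM-B +7 profiles, and expanding each stop into one sample per minute is itself a statistical simplification, so this only illustrates the mechanics of such a comparison.

```python
# Rough sketch of a KS-style comparison of two stop-time-over-depth distributions.
# Both schedules are INVENTED placeholders, not the NEDU BVM3 or VPM-B +7 profiles.

from scipy.stats import ks_2samp

def depth_samples(stops):
    """Expand (depth_m, minutes) stops into one depth value per minute of stop time."""
    samples = []
    for depth, minutes in stops:
        samples.extend([depth] * int(round(minutes)))
    return samples

schedule_a = [(21, 5), (18, 9), (15, 12), (12, 18), (9, 30), (6, 50)]                      # invented
schedule_b = [(27, 8), (24, 10), (21, 12), (18, 14), (15, 18), (12, 20), (9, 22), (6, 20)] # invented

stat, p_value = ks_2samp(depth_samples(schedule_a), depth_samples(schedule_b))
print(f"KS statistic = {stat:.3f}, p = {p_value:.4f}")
```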

If left to run freely for that depth and bottom time, what deco does BVM3 give? Couldn't forcing a total deco time longer than what the model would choose itself be producing worse outcomes? And although I understand the attempt not to have two things varying at the same time (deco profile and total time), I am not convinced that 1) they are really two separate variables, because a deco profile has both stop depth and time included, and it may not be straightforward to force one, or 2) that it's not possible to compare outcomes when total deco times are different. Certainly, if the deep stop model had produced more DCS, one could not be sure whether it was due to the deep stops or to the missing deco time, but what if it wasn't worse? An interesting case to study.

Connected to 1), how does the BVM3 model decide how to allocate the forced deco time? Is there a weight for different compartments? Did it protect fast compartments at the expense of slow compartments, leading to increased DCS? Controlling bubble formation is one thing; effectively increasing the bottom time is another. We all know, for example, that we should leave the bottom promptly, at the recommended ascent rate, and that slower is not better.

It also makes sense that it's not just the supersaturation that is important, but the period as well (I think this might have been looked at during the VPM-B+4 vs GF 40/74 comparison, but I can't see the plots). A short, higher supersaturation may generate fewer bubbles than a longer, lower supersaturation. Are the models trying to minimise the peak value or the integral over time?
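To make that last question concrete, here is a small hypothetical sketch of the two summary measures being contrasted: peak supersaturation versus its time integral. The two traces are invented and carry no physiological meaning; they just show that a brief, high spike and a long, low plateau can rank differently depending on which measure you pick.

```python
# Peak vs time-integral of a supersaturation trace. Both traces are invented
# illustrations with no physiological meaning.

import numpy as np

def supersaturation_summary(t_min, supersat_bar):
    """Peak value and trapezoid-rule time integral of a supersaturation trace."""
    peak = float(np.max(supersat_bar))
    integral = float(np.sum((supersat_bar[:-1] + supersat_bar[1:]) / 2.0 * np.diff(t_min)))
    return peak, integral

t = np.linspace(0, 60, 121)                    # minutes
short_spike  = np.where(t < 5, 0.9, 0.0)       # brief but high supersaturation
long_plateau = np.full_like(t, 0.15)           # prolonged but low supersaturation

for label, trace in [("short spike ", short_spike), ("long plateau", long_plateau)]:
    peak, integral = supersaturation_summary(t, trace)
    print(f"{label}: peak {peak:.2f} bar, integral {integral:.1f} bar·min")
```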
 
David Doolette's definition of efficiency taken from previous quotation:

"To be clear about the purpose, methods, and outcome of the study, we need to be clear what is meant by decompression efficiency. The purpose of a decompression schedule is to reduce the risk of DCS to some acceptably low level. The cost of a low risk of DCS is time spent decompressing;?

If he is saying that the longer the deco is, the lower the risk of DCS...I disagree. There IS such a thing as too much Deco.
 
Diver0001, it sounds like you don't understand what Navy divers do while underwater or how they decompress. Neither is like our recreational technical diving; both are much more like commercial diving. For example, extreme workloads might be encountered, and deco might take place (mostly) on the deck of the support ship in a warm, dry habitat.
 
T,

I'm perfectly aware that the diving isn't the same. Human physiology, however, is, regardless of where the decompression is done. Even Navy divers are human.

I understand the need for additional conservatism when extreme workloads are encountered. I was not posting about that.

R..
 
There is a sweet spot.

Obviously there is and I have spoken about this before. A reductio ad absurdum argument in relation to shallow stops might be that the ultimate shallow stop approach is to come straight to the surface! We all know that placing that much emphasis on shallow stops would be disastrous. Clearly we need to stop, and there is a depth where, according to the truth in the universe, it would be optimal to make our first stop. That is the truth we seek. You assume it is prescribed by bubble models. Although we don't agree on that, I think we can agree that there is a sweet spot.

If the deco is shorter, the deep stop model becomes advantageous.

Tom, what do you base this assumption on? Because as far as I can see that is all it is: an assumption based on bubble model dogma. What evidence do you have that it is true?

The NEDU study suggests that it is wrong. I know you struggle to see the relevance, but the path to understanding it has been clearly laid out in the RBW threads and I cannot help you further with that. In addition, there is a mounting body of data from other studies, which do use "shorter deco" and numbers of venous gas emboli as the outcome measure, suggesting that deep stop approaches to decompression risk increasing rather than decreasing bubble formation after surfacing. This includes the Blatteau study, Neal Pollock's field work with tech dives which he has presented at meetings but not yet published, and other work we are aware of. In summary, there is a clear signal in the available data that bubble models overdo deep stops, and literally nothing that points in the other direction. This is not to say that you can't use bubble models, but in a debate about whether they represent optimal practice there is little other than repetition of 15-year-old dogma in support of them. In contrast, there is mounting evidence in the other direction.

Simon M
 