Software Suggestions


I think this is a good point. We'd have to run the numbers to see, and then make a blind leap as to whether the integral of supersaturation over time in compartment n is more or less important than the similar integral in compartment n+1. If it is not a blind leap, please point me at why, as I would be VERY interested.
I don't think it's as blind a leap after the NEDU study as it might have been before.

A very important post in this whole debate is here. Dr. Mitchell describes the relative importance of "emphasizing the fast compartments" and I think it's worth reading.

Based on the analysis there and the clearly better profile (NEDU's shallow profile) deemphasizing the integral supersaturation of the faster compartments, I don't think it's much of a leap at all to conclude that deemphasizing deeper stops is likely a good move. I think describing that kind of adjustment as a "blind leap" misrepresents what we know.
 
GFs are a deceptively accessible 'dial a profile' tool; they ought to be more conservative than raw ZHL-16C, but without testing, how do we know? We are back to the same place as arguing over VPM etc.
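For anyone unfamiliar with what the dials actually do, here is a minimal sketch of Erik Baker's gradient-factor scaling of a Bühlmann compartment limit. The a/b coefficients and example numbers are placeholders, not real ZH-L16C values; only the structure is the standard formulation:

```python
# Minimal sketch of gradient-factor (GF) scaling of a Buhlmann limit.
# The a/b coefficients below are illustrative placeholders, not fitted
# ZH-L16C values; the structure is Baker's standard formulation.

def buhlmann_tolerated(p_amb, a, b):
    """Raw ZH-L16 tolerated inert gas tension at ambient pressure p_amb (bar)."""
    return p_amb / b + a

def gf_tolerated(p_amb, a, b, gf):
    """Shrink the allowed supersaturation gradient by the factor gf (0..1)."""
    m = buhlmann_tolerated(p_amb, a, b)
    return p_amb + gf * (m - p_amb)

def gf_at_depth(depth, first_stop_depth, gf_low, gf_high):
    """Linear ramp from GF-low at the first stop to GF-high at the surface."""
    if first_stop_depth <= 0:
        return gf_high
    return gf_high + (gf_low - gf_high) * depth / first_stop_depth

# Example: a hypothetical compartment at 2 bar ambient, comparing raw
# ZHL-16C (gf = 1.0) with the X/85 and X/70 surfacing values discussed below.
p_amb, a, b = 2.0, 0.8, 0.83
print(buhlmann_tolerated(p_amb, a, b))   # raw limit
print(gf_tolerated(p_amb, a, b, 0.85))   # GF-high 85
print(gf_tolerated(p_amb, a, b, 0.70))   # GF-high 70
```

Which is exactly the point: the dials are easy to turn, but nothing in the arithmetic tells you which setting is actually safer.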

There is at least some evidence that X/70 is more conservative than X/85 in some Doppler studies carried out in Grand Cayman about 4 years ago. I believe those may have been published in the massive "CCRs in scientific diving" document Simon has posted before. But that pdf isn't searchable, so I can't readily find those data or confirm they even exist there. The study was discussed on CCRx at one time but I have yet to see the publication. Hopefully someone knows where they are.
 
Please see this paper:
http://www.diverbelow.it/attachments/article/131/Thalmann et alii. Improved probabilistic decompression model risk prediction using linear-exponential kinetics.pdf

The weighted sum of integral supersaturation is in eq. 9, with weights Gn. The weights, thresholds and half-times were fitted to experimental data. I wrote a program for the EE1 model and an optimizer to find the profile with the shortest runtime for a given maximum DCS risk. The resulting profiles were very shallow. Maybe it's a bug; I have no reference data to compare against and test.
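For concreteness, the core of that calculation looks something like the sketch below. The weights, thresholds and normalization here are made-up placeholders, not the fitted values; the exact form of eq. 9 should be checked against the paper:

```python
import numpy as np

# Sketch of a weighted sum of integral supersaturation in the spirit of
# eq. 9 of Thalmann et al. All constants are PLACEHOLDERS; the fitted
# weights (Gn), thresholds and half-times must come from the paper.

G   = np.array([1e-3, 5e-4, 1e-4])   # risk weights per compartment (made up)
THR = np.array([0.0, 0.1, 0.2])      # supersaturation thresholds, atm (made up)

def integral_risk(t, p_tissue, p_amb):
    """Weighted, thresholded integral supersaturation over a profile.

    t        : sample times, min              (shape [T])
    p_tissue : compartment gas tensions, atm  (shape [ncomp, T])
    p_amb    : ambient pressure, atm          (shape [T])
    """
    excess = p_tissue - p_amb[None, :] - THR[:, None]  # supersaturation above threshold
    excess = np.clip(excess, 0.0, None)                # risk accrues only when positive
    return np.sum(G * np.trapz(excess, t, axis=1))     # weighted sum over compartments

def p_dcs(risk_integral):
    """Survival-model mapping from accumulated hazard to probability of DCS."""
    return 1.0 - np.exp(-risk_integral)
```

The optimizer then just searches over stop times for the shortest schedule that keeps p_dcs below the target.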
This is something I have been wanting to do, with a view to calculating risk numbers for my own actual logged dives. I will stare at it some more and maybe knock up some code.

Why did you choose EE1 when EE2 and LE1 did a bit better?
 
I don't think it's as blind a leap after the NEDU study as it might have been before.

A very important post in this whole debate is here. Dr. Mitchell describes the relative importance of "emphasizing the fast compartments" and I think it's worth reading.

Based on the analysis there and the clearly better profile (NEDU's shallow profile) deemphasizing the integral supersaturation of the faster compartments, I don't think it's much of a leap at all to conclude that deemphasizing deeper stops is likely a good move. I think describing that kind of adjustment as a "blind leap" misrepresents what we know.

RBW isn't letting me in to check the link; I will have read it at the time...

To be clear: my "leap" point applies to comparing the impact of supersaturation integrals between compartments. Reading the Thalmann paper, they use three compartments and derive relative weights for those by statistical means. I am not disputing the conclusion that deep stops are overdone.

I'd like to be able to take those supersaturation graphs and build software which compares profiles for risk by integrating the supersaturation over the different compartments. I can't, and the reason I can't is that the numbers which come out of one compartment need to be scaled with respect to the other compartments, and I do not have those constants.

That also means that while looking at the supersaturation graphs is a good guide, and helps explain the results, it is not a definitive answer to relative risk. I will eventually read the link above and see if it informs me better.
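The unweighted part is computable, though. A minimal sketch, assuming simple exponential (Haldanean) kinetics and illustrative half-times; as said above, turning these per-compartment integrals into one risk number would still need the fitted inter-compartment weights:

```python
import numpy as np

# Per-compartment supersaturation integrals for a depth/time profile,
# using simple exponential kinetics. Half-times are illustrative only.

HALF_TIMES = np.array([5.0, 20.0, 80.0, 240.0])  # min, illustrative
K = np.log(2.0) / HALF_TIMES                     # rate constants

def supersat_integrals(times, p_amb, f_inert=0.79, dt=0.1):
    """Integrate positive supersaturation per compartment over a profile.

    times : waypoint times, min
    p_amb : ambient pressure at each waypoint, atm (linear in between)
    """
    t = np.arange(times[0], times[-1], dt)
    pa = np.interp(t, times, p_amb)       # ambient pressure trace
    p_insp = f_inert * pa                 # inspired inert gas pressure
    p_tis = np.full(len(K), p_insp[0])    # start equilibrated at the surface
    integrals = np.zeros(len(K))
    for pi, pamb in zip(p_insp, pa):
        p_tis = p_tis + K * (pi - p_tis) * dt            # exponential uptake/washout
        integrals += np.clip(p_tis - pamb, 0.0, None) * dt
    return integrals

# Example: square 4 atm (~30 msw) / 20 min dive with a direct ascent.
print(supersat_integrals(np.array([0.0, 1.0, 20.0, 23.0, 60.0]),
                         np.array([1.0, 4.0, 4.0, 1.0, 1.0])))
```

The shapes come out like the graphs in the thread; the missing piece is still how to weight compartment n against compartment n+1.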
 
If you are interested in probabilistic decompression models, there is a much bigger literature in NMRI and NEDU technical reports available on Rubicon. LE1 is quite dated now, and in particular it does not deal well with high oxygen fraction. A more relevant model would be the LEM (Linear-Exponential Multi-gas) model that has been parameterized for MK 16 MOD 1 heliox closed-circuit rebreather diving, described in NEDU TR 02-10 (http://archive.rubicon-foundation.org/3548). There are a couple of minor typos in some of the equations, but careful reading should pick those up.

This model was developed specifically for heliox CCR diving to depths of 300 fsw; 999 constant 1.3 atm PO2-in-helium man-dives went into its development and validation. About half of these dives were added to the "he8n25" calibration data set, which was 4469 man-dives from various sources, and half were validation of schedules produced by the final model. That 999 is more dives than in the development and validation of ZH-L16, and far more than in the depth range relevant to technical diving (most ZH-L16 development dives were at 98 fsw, or deeper than 500 fsw).

LEM-he8n25, although developed for constant 1.3 atm PO2-in-helium, seems quite robust, having been used for quite a range of heliox, trimix, and heliox-to-nitrox gas-switching experiments at NEDU. LEM-he8n25 is, like all models, not perfect, but works well.
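In cartoon form, the linear-exponential kinetics that give these models their name swap exponential washout for a constant-rate one while a compartment is supersaturated beyond a crossover. This is a sketch of the idea only; the exact crossover rule and all constants here are placeholders, and the real formulation and fitted values are in the reports:

```python
# Cartoon of linear-exponential (LE) compartment kinetics: uptake is
# ordinary exponential, but elimination becomes linear (constant rate)
# while the tissue is supersaturated beyond a crossover offset. The
# crossover rule and constants are placeholders; see the NEDU reports
# for the real formulation.

def le_step(p_tis, p_insp, p_amb, k, crossover, dt):
    """Advance one compartment's inert gas tension (atm) by dt minutes."""
    if p_insp >= p_tis or p_tis <= p_amb + crossover:
        # on-gassing, or only mildly supersaturated: exponential kinetics
        return p_tis + k * (p_insp - p_tis) * dt
    # supersaturated past the crossover: fixed-rate (linear) washout
    return p_tis - k * crossover * dt
```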
 
David,

A crumb or two for the commoners... Still trying to digest the 300+ page report you put me on to. OMG. I'm all about bailouts and survivability. Never been bent, never plan to be.

How would one determine the deco that can be "blown off" from that info? ... OK, unfair question.

Maybe more on point, in the absence of lawyers, where would you place the LD50?
 
LEM-he8n25, although developed for constant 1.3 atm PO2-in-helium, seems quite robust, having been used for quite a range of heliox, trimix, and heliox-to-nitrox gas-switching experiments at NEDU. I use it to plan my own technical diving. LEM-he8n25 is, like all models, not perfect, but works well.
Sounds interesting.
Is there software/code for this available somewhere?
 
I find it interesting that the trimix/heliox efficiency experiment had a longer first stop than the following two stops. It was planned with LEM-he8n25.

BTW it was one of David Doolette's experiments, found here: http://archive.rubicon-foundation.o.../123456789/10576/NEDU_TR_15-04.pdf?sequence=1

"200 fsw for 35 minutes*

70 ft - 4 minutes
60 ft - 2 minutes
50 ft - 2 minutes
40 ft - 6 minutes
30 ft - 16 minutes
20 ft - 89 minutes

TST = 119 minutes

Divers breathe from MK 16 MOD 1 for 30 minutes prior to starting compression. Descent rate 40 fsw/min. Ascent rate 30 fsw/min.
*Time at Bottom in minutes does not include descent.
†Stop time does not include travel to stops."
 
Sounds interesting.
Is there software/code for this available somewhere?
No, there is not. I put out the pointer to the descriptions of the models because Leadduck was implementing a similar model from the 1997 paper; LEM is more relevant for technical divers, and there are published schedules with which to compare.

David
 
I find it interesting that the trimix/heliox efficiency experiment had a longer first stop than the following two stops. It was planned with LEM-he8n25.

BTW it was one of David Doolette's experiments, found here: http://archive.rubicon-foundation.o.../123456789/10576/NEDU_TR_15-04.pdf?sequence=1

"200 fsw for 35 minutes*

70 ft - 4 minutes
60 ft - 2 minutes
50 ft - 2 minutes
40 ft - 6 minutes
30 ft - 16 minutes
20 ft - 89 minutes

TST = 119 minutes

Divers breathe from MK 16 MOD 1 for 30 minutes prior to starting compression. Descent rate 40 fsw/min. Ascent rate 30 fsw/min.
*Time at Bottom in minutes does not include descent.
†Stop time does not include travel to stops."

Probabilistic decompression models have some fundamental differences from deterministic decompression algorithms (e.g. ZH-L16 or VPM-B) that result in quite different looking schedules. Probabilistic schedules will not always produce monotonically increasing stop times and will even skip stops. There are two parts to the explanation of this behavior.

First, in typical deterministic decompression algorithms comprised of a collection of compartments with different half-times that represent potential DCS-sites (e.g. ZH-L16 or VPM-B), at any point in time the decompression is determined by one controlling (or leading) compartment, and shallower decompression stops get longer as control is passed to successively slower half-time compartments. This produces useful schedules, but it does not make physiological sense; bubbles can exist (either growing or shrinking depending on prevailing conditions) in any compartment that has been supersaturated, and every such DCS-site contributes to the risk of DCS whenever it contains bubbles. This is formalized in probabilistic models, where the probability of DCS is one minus the joint probability of no injury in all compartments. Therefore, all compartments can contribute to the probability of DCS at all times, and consequently all compartments can control stops throughout decompression.

Second, there are a variety of ways to implement a probabilistic decompression algorithm, but all of them involve calculating the probability of DCS out to the end of risk accumulation, i.e. through the whole schedule and out to some long time after surfacing. In other words, unlike deterministic algorithms, which calculate each stop completely independently of what is going to happen next, a probabilistic algorithm has to take into account what is going to happen next. Therefore a probabilistic algorithm might, for instance, find that extra time at the first stop will allow subsequent stops to be shorter. The easiest example of this is when scheduling decompression that will have gas switches to a higher oxygen fraction: the probabilistic algorithm 'knows' they are coming and therefore might find the best schedule is to skip the stop before the switch in favor of getting on to the higher oxygen fraction gas.
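The joint-probability statement can be written compactly. A minimal sketch, assuming each compartment i contributes an independent accumulated hazard R_i (the hazard values below are made up):

```python
import numpy as np

# If compartment i survives the exposure with probability exp(-R_i), where
# R_i is its hazard integrated through the whole schedule and out to long
# after surfacing, then the joint no-injury probability is the product of
# the survivals:
#
#   P(DCS) = 1 - prod_i exp(-R_i) = 1 - exp(-sum_i R_i)
#
# so every compartment contributes at all times and none is singled out
# as "controlling". The R values below are made-up placeholders.

def p_dcs(hazard_integrals):
    return 1.0 - np.exp(-np.sum(hazard_integrals))

print(p_dcs(np.array([0.010, 0.004, 0.002])))  # ~1.6 % for these made-up hazards
```

This is also why the schedule has to be optimized as a whole: shortening one stop changes every R_i downstream of it.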


All this makes people who are only familiar with traditional algorithms and schedules uncomfortable until they try it and find it works.


David Doolette
 