Software Suggestions


Probabilistic decompression models have some fundamental differences from deterministic decompression algorithms (e.g. ZH-L16 or VPM-B) that result in quite different-looking schedules. Probabilistic schedules will not always produce monotonically increasing stop times and will even skip stops. There are two parts to the explanation of this behavior.

First, in typical deterministic decompression algorithms comprised of a collection of compartments with different half-times that represent potential DCS sites (e.g. ZH-L16 or VPM-B), at any point in time the decompression is determined by one controlling (or leading) compartment, and shallower decompression stops get longer as control is passed to successively slower half-time compartments. This produces useful schedules, but it does not make physiological sense; bubbles can exist (either growing or shrinking, depending on prevailing conditions) in any compartment that has been supersaturated, and every such DCS site contributes to the risk of DCS whenever it contains bubbles. This is formalized in probabilistic models, where the probability of DCS is one minus the joint probability of no injury in all compartments. Therefore, all compartments can contribute to the probability of DCS at all times, and consequently all compartments can control stops throughout decompression.

Second, there are a variety of ways to implement a probabilistic decompression algorithm, but all of them involve calculating the probability of DCS out to the end of risk accumulation, i.e. through the whole schedule and out to some long time after surfacing. In other words, unlike deterministic algorithms, which calculate each stop completely independently of what is going to happen next, a probabilistic algorithm has to take into account what is going to happen next. Therefore a probabilistic algorithm might, for instance, find that extra time at the first stop will allow subsequent stops to be shorter. The easiest example of this is scheduling decompression that will have gas switches to a higher oxygen fraction: the probabilistic algorithm 'knows' they are coming and therefore might find the best schedule is to skip the stop before the switch in favor of getting onto the higher-oxygen-fraction gas.
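The joint-probability idea above can be sketched in a few lines. This is a toy illustration, not any published model: the hazard numbers below are made up, and a real model would derive each compartment's hazard from its supersaturation along the dive profile and out past surfacing.

```python
import math

def p_dcs(compartment_hazards, dt):
    """Probability of DCS when every supersaturated compartment
    contributes risk: P = 1 - exp(-sum over compartments of the
    integrated hazard). compartment_hazards is a list of per-compartment
    hazard time series sampled every dt minutes (all values invented)."""
    total_integrated_hazard = sum(
        sum(r * dt for r in hazards) for hazards in compartment_hazards
    )
    # The joint probability of no injury in all compartments is
    # exp(-total hazard); P(DCS) is its complement.
    return 1.0 - math.exp(-total_integrated_hazard)

# Two hypothetical compartments sampled at 1-minute intervals:
fast = [0.002, 0.004, 0.001, 0.0]    # hazard per minute
slow = [0.0005, 0.001, 0.002, 0.003]
print(round(p_dcs([fast, slow], dt=1.0), 4))  # 0.0134
```

Because every compartment's hazard enters the same sum, a compartment that is not "controlling" in the deterministic sense still moves the final probability, which is why all compartments can shape stops at all times.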


All this makes people who are only familiar with traditional algorithms and schedules uncomfortable until they try it and find it works.


David Doolette

Thank you David.


I'd like to point out some similarities between the probabilistic implementation description above and the existing VPM implementation. The VPM model implementation has some characteristics in common with probabilistic implementations:


VPM considers the whole ascent and surface portion on every dive. It does this by running several iterations of the whole ascent and comparing them on the basis of overall critical volume (supersaturation) for the entire dive and surface supersaturation period, then resolves the final ascent from this collective result. This also gives VPM the broader characteristic of considering what is to come next, as described above for probabilistic models. The example above, where higher O2 at the end of a schedule may lead to reduced times at the earlier stops, is also typical of VPM if that is seen as beneficial to the whole ascent. However, VPM does fall back to conventional monotonically increasing stop times, as it's built to work that way.


Of course, VPM is not tied to a known risk number as yet, but it seems the underlying model implementation design is sufficiently sophisticated to be expanded to accept a risk value as input.
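The iteration described above can be sketched roughly as follows. Everything here is hypothetical: the function names, the toy evaluate/relax stand-ins, and the convergence rule are invented for illustration and are not taken from any published VPM code.

```python
def vpm_style_schedule(initial_gradient, evaluate_ascent, relax,
                       tol=1e-2, max_iter=50):
    """Sketch of a critical-volume style iteration (hypothetical API).

    evaluate_ascent(gradient) -> (schedule, critical_volume)
    relax(gradient, critical_volume) -> new allowed gradient
    """
    gradient = initial_gradient
    schedule, volume = evaluate_ascent(gradient)
    for _ in range(max_iter):
        gradient = relax(gradient, volume)
        schedule, new_volume = evaluate_ascent(gradient)
        if abs(new_volume - volume) < tol:  # result has stabilized
            break
        volume = new_volume
    return schedule

# Toy stand-ins so the loop runs; a real implementation would compute
# these from bubble mechanics and the dive profile.
def evaluate_ascent(g):
    return [(9, 60.0 / g)], 1.0 / g   # (stop list, critical "volume")

def relax(g, volume):
    return g + 0.1 * volume           # allow a bit more supersaturation

depth, minutes = vpm_style_schedule(1.0, evaluate_ascent, relax)[0]
print(depth, round(minutes, 1))
```

The key point the sketch captures is that each pass recomputes the entire ascent before the limits are adjusted, which is what gives VPM its limited "looking ahead" character.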

 
...//... All this makes people who are only familiar with traditional algorithms and schedules uncomfortable until they try it and find it works.


David Doolette
Quite so.

I come from a different direction: I never truly "grokked" statistics; the entire concept drove me crazy. A behaviorist co-worker told me to take stats as offered by the social sciences department and then we would talk. I did. Mind-bending. You can't ever withhold treatment.

My gold standard, removing the suspected cause and observing whether the malady recurs, is unethical in human subjects. There is a whole subset of statistics that deals with this alone.
 

There are some similarities between the initial evaluations of the whole schedule in VPM-B that you describe and probabilistic algorithms. The key difference, which explains the behavior noted by JohnnyC, is that in VPM-B, based on the evaluation you describe, an 'allowed supersaturation rule' is set at the beginning of the ascent, and then the length of each stop is based on not violating that rule upon travel to the next stop only. In the probabilistic decompression model schedule search algorithm used to generate the schedule JohnnyC referenced, there is no 'allowed supersaturation rule' to follow; the duration of every stop is evaluated on how it fits into the whole schedule and influences the final probability of DCS.

So the algorithm might, for instance, find that the optimum schedule skips some deeper stops, allowing substantial but short-lived supersaturation / bubble growth in a fast compartment in order to avoid gas uptake into a slower compartment that would result in longer-lived supersaturation / bubble growth later. That accounts for the gas switch example I gave, and for JohnnyC's example. In the latter schedule, the algorithm would have evaluated (among many options) 1-minute stops at 90 and 80 fsw and a 2-minute stop at 70 fsw, but found it was better (under the model) to skip the 90 and 80 fsw stops and put all that time (4 minutes) at 70 fsw.
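This kind of whole-schedule search can be illustrated with a brute-force toy. The risk function below is invented purely so the example reproduces the behavior described (shifting all the deep-stop minutes shallower); it is not LEM or any published model, and a real search would integrate risk through the entire profile rather than score stops independently.

```python
from itertools import product

def best_allocation(stops, total_time, p_dcs):
    """Brute-force sketch of a probabilistic schedule search: try every
    way to split total_time (whole minutes) across the candidate stops
    and keep the allocation with the lowest predicted P(DCS)."""
    best, best_p = None, float("inf")
    for alloc in product(range(total_time + 1), repeat=len(stops)):
        if sum(alloc) != total_time:
            continue
        p = p_dcs(dict(zip(stops, alloc)))
        if p < best_p:
            best, best_p = dict(zip(stops, alloc)), p
    return best, best_p

# Toy risk model (purely illustrative): a minute at 70 fsw reduces
# predicted risk more than a minute at 80 or 90 fsw, so the search
# moves all 4 minutes shallow, mirroring the 90/80 fsw example above.
def toy_p_dcs(alloc):
    return 0.05 - 0.004 * alloc[70] - 0.001 * alloc[80] - 0.0005 * alloc[90]

alloc, p = best_allocation([90, 80, 70], 4, toy_p_dcs)
print(alloc)  # {90: 0, 80: 0, 70: 4}
```

Because every candidate allocation is scored by its effect on the final probability, nothing forces stop times to increase monotonically with shallower depth; the monotone pattern of deterministic schedules simply is not a constraint here.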

David Doolette
 
I use Multi-Deco on Windows and MacOS, and PastoDeco Pro on Android. They generally agree (within 1 minute or so).

As far as comments from the author of M-D go, it is my understanding (I could be totally wrong) that he wrote M-D, but he did not write the actual libraries that M-D uses to do the Buhlmann calculations. If that is correct, then I would take his comments as what they are: comments about someone else's programming. It's not the author running down his own code.

Is there a good planning tool for Linux (which I am migrating to)? The Multi-Deco site lists it as an option, but there's no download link, and the author's comments in their forum are that they aren't going to do a Linux version (at least, not any time soon).

No need for a specific Linux Multi-Deco version, as it works perfectly with Wine. I've been using it for 2 years on my laptop running Linux Mint.
 
Thanks for the link David!

I'm interested in trying to code up an implementation of this model; mostly for personal interest. Before I start reading, can you say off the top of your head if the paper includes everything necessary to create a complete implementation? Also, are there any licensing or IP considerations that need to be considered?

There are a couple of minor typos in some of the equations, but careful reading should pick them up.

I LOVE this. "Spot the deliberate mistake" to make sure we pay attention. :)
 
The underpinnings of this thread have become most intriguing to me.

While I labor through this reference (http://archive.rubicon-foundation.org/4975), kindly given to me by Dr. Doolette, I began to wonder why the same was not done more comprehensively from a recreational standpoint in the private sector. I believe the reason is now obvious (see Human Experimentation: An Introduction to the Ethical Issues).

This "concern" is fortified by the general approach to the problem that I hear over and over again: one finds a deco schedule that works for YOU. Human experimentation at its finest...
 
If you are interested in probabilistic decompression models, there is a much bigger literature in NMRI and NEDU technical reports available on Rubicon. LE1 is quite dated now, and in particular it does not deal well with high oxygen fractions. A more relevant model would be the LEM (Linear-Exponential Multi-gas) model that has been parameterized for MK 16 MOD 1 heliox closed-circuit rebreather diving, described in NEDU TR 02-10 (http://archive.rubicon-foundation.org/3548). There are a couple of minor typos in some of the equations, but careful reading should pick them up.

This model was developed specifically for heliox CCR diving to depths of 300 fsw, and 999 constant 1.3 atm PO2-in-helium man-dives went into the development and validation; about half of these dives were added to the "he8n25" calibration data set, which was 4469 man-dives from various sources, and half were validation of schedules produced by the final model. That 999 is more dives than went into the development and validation of ZH-L16, and far more than ZH-L16 had in the depth range relevant to technical diving (most ZH-L16 development dives were at 98 fsw or deeper than 500 fsw). LEM-he8n25, although developed for constant 1.3 atm PO2-in-helium, seems quite robust, having been used for quite a range of heliox, trimix, and heliox-to-nitrox gas switching experiments at NEDU. LEM-he8n25 is, like all models, not perfect, but it works well.
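For anyone reading TR 02-10 alongside this thread, the "linear-exponential" kinetics at the heart of LEM can be sketched as follows. This is a simplified illustration of the idea only (exponential uptake, linear washout above a crossover tension); the condition and constants are not the report's actual equations or parameters.

```python
import math

def update_tension(p_tissue, p_inspired, half_time_min, crossover, dt_min):
    """One time step of simplified linear-exponential gas kinetics.

    Uptake, and washout while tissue tension is at or below the
    crossover pressure, follow ordinary exponential kinetics; washout
    above the crossover proceeds at a fixed linear rate, which is the
    hallmark of the 'LE' family of models. All values illustrative.
    """
    k = math.log(2) / half_time_min
    if p_tissue <= p_inspired or p_tissue <= crossover:
        # Exponential approach toward the inspired tension.
        return p_inspired + (p_tissue - p_inspired) * math.exp(-k * dt_min)
    # Linear washout: rate pinned at the exponential rate evaluated at
    # the crossover tension; never undershoot the inspired tension.
    return max(p_inspired, p_tissue - k * (crossover - p_inspired) * dt_min)

# Uptake is exponential: after one half-time, half the gap is closed.
print(round(update_tension(1.0, 2.0, 10.0, 3.0, 10.0), 6))  # 1.5
```

The slow linear washout is what lets these models penalize time spent heavily supersaturated long after the exposure, instead of forgiving it as soon as an exponential compartment off-gasses.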

Great, thanks a lot. The model description in Appendix A looks fine. I have a question about the parameters in Table 4: all the tissue solubilities of O2 and He are zero. That doesn't fit with the other parameters or with some of the equations (A3.b, A21), and I don't understand why. Is this really the parameter set that was used?
 
There is also my planner, which is written in Java and works under Linux, Windows, or OSX quite happily. For cellphones running Android, there is a version of it called aScuba.
 

In addition to the differences pointed out by David, VPM's iterative "looking ahead" only works up to a point and has a very specific purpose. The purpose is not to solve for an optimal schedule (as in probabilistic models), but to relax VPM's supersaturation limits in order to force VPM to better intersect with no-decompression limits (i.e. it's designed to help with the "lighter deco" side of things).

Once the critical volume adjustment wears off during more substantial dives, there is really little effect from the look-ahead and the supersaturation gradients are fixed. As discussed here and here, this feature implies steadily increasing risk as dives lengthen and go deeper. To avoid this steadily increasing risk, divers need to compensate (e.g. by adding shallow stop time).
 
I know this thread hasn't had much action lately (and I read the whole thing!), but I just wanted to say thanks for Subsurface. I've been looking for something that works on Linux, and I was able to download my Petrel logs via Bluetooth and also use the dive planner for trimix schedules. I can pass the log back to my Windows laptop for on-the-road planning.

Very happy.
 