Deep Stops Increases DCS


I like the DCIEM tables. Back in my bad old air diving days, I used these extensively.


*************

VPM-B was calibrated and checked against various points in ZHL-C and DCIEM 1994, for both N2 and helium. VPM-B was also calibrated to match the PADI NDL tables reasonably well.

Of course all four models have different ideas on how to do the same job, so alignment between the models concerned is only transient.


This is a "funny" thread. Regarding VGE and decompression algorithms, the following is lifted from Deco for Divers, Mark Powell, 2008, p 224:

The DSAT Recreational Dive Planner (PADI) model (1987)

The M-values used for the RDP were adapted from the Doppler bubble testing of Dr Merrill Spencer and testing by Dr Raymond E Rogers, Dr Michael R Powell, and colleagues with Diving Science and Technology Corp, a corporate affiliate of PADI. The DSAT M-values were empirically verified with extensive hyperbaric chamber and in-water diver testing and Doppler monitoring.
 
As I said, it used existing data as a basis, and a proper measure of decompression stress, to come to its own conclusions. In many cases, VPM-B has the same or lower stress than other models' equivalents of the same dive.
Which data?

What is a "proper measure" of decompression stress?
 
If too much VGE was found post-dive during the DCIEM trials, a few things happened: the diver was recompressed... Maybe they were concerned with VGE??

Then they changed the tables and retested... extensively, under a variety of workloads and environmental conditions, to get a better grip on finding numbers that reduced post-dive VGE to acceptable levels.

To even imply that VGE was unimportant to DCIEM is completely irresponsible and wrong.
 
What you get away with one day doesn't affect what you'll get away with on another day.

Poor example, possibly. What I meant to address was NetDoc's concern about his students diving within NDLs and their % risk of DCS. If 1,000 people dive the same NDL profile, some small percentage may have an issue; but if you as an individual are fine today, you'll probably be fine most of the time on identical dive profiles. Obviously aging and some other factors kick in, but mainly I'm trying to say that a 1% DCS risk on a given profile may not mean that if you dive it 100 times you will get bent once; rather, of 100 divers diving it, one may get bent.
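To put a rough number on that last point, here is a minimal sketch assuming each dive is an independent trial with the same fixed per-dive risk (a big simplification of real diving; the 1% figure is just an illustrative placeholder, not from any table):

```python
# Illustrative only: per-dive risk treated as an independent Bernoulli trial.
per_dive_risk = 0.01   # hypothetical 1% P(DCS) for a given profile
n_dives = 100

p_at_least_one_hit = 1 - (1 - per_dive_risk) ** n_dives
expected_hits = per_dive_risk * n_dives

print(f"Expected hits over {n_dives} dives: {expected_hits:.1f}")
print(f"P(at least one hit): {p_at_least_one_hit:.2f}")   # about 0.63, not 1.0
```

In other words, under that simplification, 100 dives at a nominal 1% risk gives roughly a 63% chance of at least one hit rather than a guaranteed single hit, and the inter- and intra-individual variability discussed in the replies sits on top of that.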

The inter-individual variability that you are talking about, RainPilot, and the intra-individual variability that PfcAJ is talking about are probably both important. The existence of intra-individual variability – and PfcAJ's caution – contains an important safety message. The U.S. Navy has conducted a number of decompression trials in the last couple of decades where a profile has been dived, practically identically executed, hundreds of times (the deep stops trial is one example). Because the number of subjects is limited, the same subject often dives the identical profile on several occasions – and the identical dive profile will result in DCS on some occasions and not others, in the same individual subject. That is evidence of intra-individual variability.

David Doolette
 
So it's still tissue pressure stress measurement first, as the primary measure, with VGE as a secondary check only.

You are mixing up the real with the “imaginary”.

The only things that are measured in the development and validation of decompression procedures are the dive profiles (depth/time/breathing gas history, and maybe temperature and work rate) and the outcomes: DCS and/or VGE. The models and algorithms that connect the dive profiles to the outcomes are comprised of latent (i.e. unobserved) variables. These latent processes, such as tissue gas uptake and washout and tissue bubble formation, are what we believe are the important processes that lead to DCS, but they are not measured in the development and validation of decompression procedures (which is why Leadduck called them “imaginary”). Sometimes these latent processes might be based on separate experimental observations (experiments in gels, animals, even humans), but often they are purely notional. For instance, the 635-minute half-time N2 gas exchange compartment in ZH-L16 is a fiction; there is no anatomy in the human body that would result in blood:tissue N2 gas exchange that slow. Decompression models are (or should be) just a means of connecting the dive profile and outcome data. Believing too much in the imaginary constructs of the models can lead you down a rabbit hole – which I think is where we are.
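For readers who haven't seen the math behind a "half-time compartment", here is a minimal sketch of the standard exponential gas-exchange equation used in Haldane/Bühlmann-type models (the numbers are illustrative, not from any published table, and the compartment is a mathematical construct rather than a real tissue):

```python
import math

def compartment_tension(p_start, p_inspired, half_time_min, minutes):
    """Constant-depth exponential approach of a compartment's inert-gas
    tension toward the inspired inert-gas pressure (Haldane equation)."""
    k = math.log(2) / half_time_min
    return p_inspired + (p_start - p_inspired) * math.exp(-k * minutes)

# Illustrative: 30 minutes at ~30 m on air (inspired N2 ~3.16 bar, ignoring water vapour).
p_surface_n2 = 0.79
p_inspired_n2 = 3.16
for half_time in (5, 635):
    p = compartment_tension(p_surface_n2, p_inspired_n2, half_time, 30)
    print(f"{half_time:>3} min compartment after 30 min: {p:.2f} bar")
# The 5 min compartment is nearly saturated; the 635 min compartment has barely moved,
# which is the sense in which such slow compartments are bookkeeping devices, not anatomy.
```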
 
Decoplanner didn't exist when the Doppler studies were being conducted on WKPP divers. Most of the guys were using DECOM (straight ZHL-16) for cutting tables, and then the "classical" RD (not this new-fangled mumbo-jumbo) to tweak the profiles, but a few also had access to tables generated by Hamilton.
The earliest references I've found to VPlanner being used by divers are from 2005.
The NEDU study was presented in 2008 IIRC.
That's at least three years of "allegiance" to bubble models before they were challenged by NEDU's study.

Looks like Multideco hit the market in 2011 or thereabouts.

Way more customers using VPlanner. Certainly I have noticed this on dive boats here.

*edit* Obviously I'm speculating, because nothing really explains the obfuscation and belligerence that Ross has been posting in the RBW thread and here, towards the NEDU study, Dr Mitchell, Dr Doolette and UWSoujourner. For years.
One of the problems I see is that not everyone is working from the same exact definition of a "Deep Stop". I wish the industry would agree on a standard set of defined terms. Then, whether you still subscribe to deep stops or not... at least we are all speaking the same language.


Now a post I've been working on for a while... that answers some of these questions. Time for a bit of history... I guess many here do not have access to 15 or 20 years of deco history of deep stops in the tech world, and it is hard to track down. So... here is a quick collection of bits.


Erik Baker made his GF modification to ZHL in the mid 90's, and he did it primarily to help JJ and friends with their long cave exploration deco, as described above. GF use has since grown to all areas of diving, and its use now extends well beyond its original design or purpose.

The Richard Pyle method was made known in the mid 90's, and this link has a lot of history in that doc too.


By about 2000, GF was made available in the GUE DecoPlanner program. Also GAP software, at about the same time, had GF and RGBM.


VPM started in the 70's, but the full modern VPM was developed through the late 90's and 2000's. It was done mostly by Eric Maiken, and then on the VPM-List (later called DecoList) with Erik Baker, Prof Yount, and many others. It took several years.


The DecoList was a Usenet mailing list group (how things worked before web forums), and it had many of the important researchers and scientists in deco at the time, plus senior training and other interested people, all on this list talking and working together. We keep an archive of most of the DecoList that can be looked over here. It has loads of fascinating conversation on deco and tech subjects. But note two names who are missing from the list - Mitchell and Doolette. Yeah... that's probably why they don't seem to know the real history of VPM.


V-Planner first came out in late 2001, and by early 2003 it had changed to VPM-B, and it's been the standard VPM ever since. VPM history is detailed here:


If you bought a tech dive computer back then, it was probably a VR3 with GF and Pyle stops, or an Explorer from Hydrospace that had RGBM in it.


As the years went on, GF was getting more use, but strictly in bubble model format or Pyle stop format. VPM-B was gaining trust and popularity too. No one was doing red raw ZHL dives.

2005(?) was the first Shearwater GF.

2007 was the V-Planner Live Liquivision X1 computer.


**********

2008:

In 2008, the Undersea and Hyperbaric Medical Society (UHMS) held a two-day workshop called "Decompression and the Deep Stop", two days before its annual meeting. It was chaired by Simon Mitchell (who was also an officer of the UHMS at the time).

Many of the world's deco researchers and peers were in attendance. Much relevant data and many reports of interest were presented. The NEDU test was one of the reports under review. In the follow-up questions, the NEDU test received a great deal of criticism, for all the same reasons I have mentioned in these threads. None of the peers gave it a favorable comment.

I want to make something very clear. All of Simon Mitchell's "growing list of evidence" that he claims today was presented and discussed at this workshop.

At the end of the workshop, there was a consensus discussion to resolve two summary statements (Simon Mitchell was chair). All of the consensus discussion pages are here:

The two summary questions:

2/ The Efficacy of a Deep Stop?

Consensus: [attached image: consensus_2.jpg]

So there you have it. A peer review of all matters to do with deep stops, and the tests and data to date, resolved to the above.

If you read the consensus pages, you can see Simon obviously wasn't satisfied with the peer position in 2008.


*********

Fast forward to now. No new research, no new data, same old NEDU test.

Most of the people involved in tech model design move on, leaving the door wide open to......


One person is not happy with the peer position, so he does an end run around the peer process. He has taken his preferences directly to the public, to forums, to YouTube, where he makes his personal preference statements known, without the worry of a peer review or a peer challenge in public.


And that's where we are now.


*************
 
That was a very warm, heartfelt :heart: eulogy. Touching. Thanks. RIP VPM.
 
As is bleedingly obvious, I'm no decompression scientist. However, I like to believe that I have a decent grasp of the basic requisites for a good model (I'm talking about mathematical models, so let's keep Cindy Crawford - who incidentally, in addition to being a supermodel, also has a degree in Chem. Eng., so she's by far the most famous model in chemical engineering - et al. out of this. And yes, that was a pun).

A model is formulated to fit a theoretical equation (or algorithm) to experimental data, for prediction purposes. The data we use to build our model on are called the "fitting data". One important rule of modeling is that one should exercise great care using the model outside the limits of the data it's based on. Another important rule is that the robustness of a model can't be fully appreciated by the goodness of the fit to the fitting data. One should always generate new data independent of the fitting data to check how well the model fits these data. Those data are called the validation data. A model is poor if it hasn't been validated with independent data, preferably produced using a different protocol than the fitting data. And if the model is adjusted using a new set of data, another set of data should be generated for validation since the previous validation data now have become a second set of fitting data.
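As a toy illustration of the fitting-versus-validation principle above (nothing to do with any particular deco model; the data below are made up on the spot):

```python
import random

random.seed(1)

# "Fitting data": noisy samples of an underlying trend y = 2x + 1
fit_x = list(range(20))
fit_y = [2 * x + 1 + random.gauss(0, 1) for x in fit_x]

# Fit a straight line by ordinary least squares
n = len(fit_x)
mx, my = sum(fit_x) / n, sum(fit_y) / n
slope = sum((x - mx) * (y - my) for x, y in zip(fit_x, fit_y)) / sum((x - mx) ** 2 for x in fit_x)
intercept = my - slope * mx

# "Validation data": generated independently of the fitting data
val_x = [x + 0.5 for x in range(20, 30)]
val_y = [2 * x + 1 + random.gauss(0, 1) for x in val_x]
val_mse = sum((slope * x + intercept - y) ** 2 for x, y in zip(val_x, val_y)) / len(val_x)

print(f"fitted model: y = {slope:.2f}x + {intercept:.2f}")
print(f"validation MSE (independent data): {val_mse:.2f}")
```

A good fit to the fitting data alone says little; the check that matters is how the fitted line performs on the validation data, which were never used in the fit and ideally come from a different protocol.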

A model can be a purely empirical model ("let's see which randomly selected equation fits the data best"), a first principles model ("let's work out an equation that describes the behavior, based on what we know about the physics going on"), or a semi-empirical model (basically a compromise between the two former types). To my limited understanding, Bühlmann-type decompression models are semi-empirical (the different compartments are hypothetical and don't correspond to any specific tissues, but the phenomena described in the treatment of these compartments and their behavior are established physics), but they are generally based on a pretty extensive set of empirical data (dating back to J.S. Haldane himself), produced by several different protocols. This is a sign of robustness.

So far, I've learned that the bubble models are based on postulated phenomena that haven't been observed (they are somehow based on preventing microbubble growth, but according to the owner of perhaps the most popular bubble model, physical, i.e. measurable, microbubbles in real divers aren't relevant). They also haven't been validated by new datasets, but they have been "calibrated" against other models. I'm still wondering what that calibration is based on, though. To me, that doesn't indicate a very robust model, if not for anything else then at least based on the basic rules for modeling.

Something that is a little ironic to me is that I've tried to learn which data the perhaps most popular bubble model is based on. I haven't been able to get an answer to that. I've tried to learn how that model has been calibrated, and what it has been calibrated against. I haven't been able to get an answer to that. I've been told that the perhaps most popular bubble model is based on "a proper measure of decompression stress" and that this bubble model "has the same or lower stress than other models", but I haven't been able to get an answer to what a "proper measure of decompression stress" is. What really makes the irony is that the first rule of peer review is that in order to have a proper peer review, methods and protocols must be disclosed to one's peers. The same person who criticizes his opponent for avoiding peer review refuses to disclose or avoids disclosing his datasets, his methods and his protocols.

(tl;dr version of the last paragraph: Pot, meet kettle.)

BTW, I've also been told that "Only military tables have pDCS ratings", while I'm pretty sure that at least the DSAT/PADI RDP table (which I until now believed is not a military table) is generated for a lower P(DCS) than the US Navy table. You learn something new every day...
 
...BTW, I've also been told that "Only military tables have pDCS ratings", while I'm pretty sure that at least the DSAT/PADI RDP table (which I until now believed is not a military table) is generated for a lower P(DCS) than the US Navy table. You learn something new every day...

I've never seen the probability of DCS for DSAT, but it is more conservative than the 2008 USN tables, which were made slightly more conservative than the previous iteration at 70 and 80 feet. DSAT/PADI RDP are from 1987! PZ+ is Pelagic Pressure Systems' version of a detuned Buhlmann algorithm, details unknown.

[attached image]
 
An explanation for the apparent conservatism of the DSAT tables is in order. They were intentionally and artificially made more conservative with an eye to repetitive diving. What follows is from my memory of having to read all of this years ago when I became an instructor, so if I err in any detail, I will appreciate being corrected.

Before the creation of the DSAT tables, the U.S. Navy tables were the ones most commonly used. To determine surface intervals for repetitive diving, they used the 120-minute compartment as the basis. That 120-minute compartment was then the longest of the compartments, and was added to the Navy design by Workman. IIRC, his use of it for repetitive dive planning was somewhat arbitrary. It really did not matter all that much to the Navy, since they were usually only doing one dive a day anyway.

It mattered greatly to the recreational diving community, though. That surface interval schedule was keeping divers out of the water between dives for a very long time, creating a problem for dive operations. The research that led to the DSAT tables focused in part on the question of the appropriate compartment to guide surface intervals on dives that did not require decompression, the kind of dives being done by the recreational diving community. That research indicated that the 40-minute compartment could be used for such dives. In building the table, though, they decided to go a little more conservative and selected the 60-minute compartment. They then made the first dives a bit shorter, which helped shorten the required surface interval even more. That is the real impact of the DSAT tables: DSAT divers on a two-tank dive could get back in the water for the second dive sooner and stay longer at a given depth than could divers using the U.S. Navy tables.
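A rough sketch of why the choice of controlling compartment matters so much for surface intervals (illustrative numbers only, not taken from any published table): excess inert-gas loading decays exponentially with the compartment's half-time, so a shorter controlling half-time "forgets" the previous dive much faster.

```python
def excess_remaining(half_time_min, surface_interval_min):
    """Fraction of a compartment's excess inert-gas loading left after a surface interval."""
    return 0.5 ** (surface_interval_min / half_time_min)

# Illustrative: excess loading remaining after a 60-minute surface interval
for half_time in (40, 60, 120):
    frac = excess_remaining(half_time, 60)
    print(f"{half_time:>3} min compartment: {frac:.0%} of excess remains after 1 h")
# A 120 min controlling compartment still holds ~71% of its excess after an hour,
# versus 50% for a 60 min compartment - hence the much longer USN-style surface intervals.
```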
 