Computer vs Algorithm


I bought my first computer (Suunto Gekko) based on my perception of price/features. I bought my second computer (Suunto Cobra 2) based on features (hose AI and USB computer interface) and on matching my first computer's algorithm.
Much the same for me. When I got certified, I bought some used gear. It came with a Suunto Cobra. I liked that it had hose AI and USB download, and I've stuck with it even though I've sold most of the other gear I bought. My son had another comp, but when we got a damned good deal on a Suunto, I wanted our computers to use the same algorithm. Besides, the one-button interface on his first comp was a real PITA, which was another reason for buying a new one. When a second son took his OW, I specifically looked for a Suunto for consistency.

I've never felt limited by the Suunto algorithm, since I hardly ever do more than two dives per day and am quite diligent about avoiding sawtooth profiles, fast ascents and skipped safety stops. And I really don't like riding my NDL.
 
I'm talking about validation as being fit for purpose in the context of previously researched results with corroborating evidence to link them to a particular level of DCS risk.

R..

I thought that was what I said here:

In my mind, the scientific validation of Buhlmann has been amassing large amounts of data about dives done using the algorithm, showing that it produces DCS at a rate within a tolerable limit.

And then I asked if that has not been done for the algorithm Cochran uses.
 
Well... this thread is quickly becoming the Cochran thread. It started out with the question of people's purchasing decisions and we got (as we usually do) derailed into discussing algorithms.

I linked an article in post #6 that might be worth reading if you're interested. There are some interesting observations there, but they compared the algorithm to equally opaque offerings from other manufacturers, and none of the dives were extreme enough to really show what happens when decompression times start to mount. I don't know what conclusions you can really take from it. The author of the article didn't draw any conclusions about whether any of the algorithms were fit for purpose in a technical context. It's a shame that they didn't use something like pure Buhlmann as a baseline, since we understand the limits of Buhlmann better than the rest. I actually don't know where this article originated. To me it comes across as a bachelor's thesis.

As far as I know, nobody out there is testing proprietary deco algorithms. Even getting comparative dive profiles for some of these algorithms, based on a well-researched dive like the one NEDU used to compare Buhlmann to BVM(3), could prove to be somewhat enlightening.

R..
 
And then I asked if that has not been done for the algorithm Cochran uses.

This is my last post on the subject:

If you read the above posts and the paper uploaded, you will know that Cochran has access to all the scientific studies, probabilistic model software (dive planner) and implementations from the Navy, plus the experience of the Thalmann Algorithm's implementation, validation and verification. Cochran can use all of it, including all the man-tested dives, and can follow the same procedure described as "the gold standard" (Navy) used in every validation (and it can be applied to the civilian version).
Thanks to all and safe dives!
C
 
I am certain that Cochran has validated their model as well as, or better than, anyone has. That is not MY point. My point is that they have an obscure, unexplained and (to me) unacceptable set of internal routines that decides when you should do what during your ascent, whether that is convenient, or even possible, or not. Perhaps that works for Navy divers, who really have no choice. It does not work for me, and I have a choice.
 
Which is exactly why I do not like the Cochran computers. Tell me how it works, or don't try to sell it to me.
If you read the paper mentioned above, you will find that Thalmann published Fortran code as well as the details of the algorithm. This actual algorithm was also man-tested in the NEDU deep stop study that is taken to show the superiority of Buhlmann GF over VPM (neither of which was tested).

I imagine the USN will have done some work to make sure the Cochran matches their requirements.

That seems much more secure than arguing over whether it is ok to pass GF lo if you can make it to the surface without passing GF hi.
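
For anyone who hasn't dug into the details: what makes the Thalmann algorithm distinctive is exponential gas uptake but linear elimination once a compartment is supersaturated past a crossover tension, instead of the symmetric exponential kinetics in Buhlmann. A minimal conceptual sketch in Python; the half-time, crossover and starting tension are made-up illustrative numbers, not the published VVal-18 constants:

import math

def update_tissue(p_tissue, p_ambient, half_time_min, dt_min, crossover):
    # One time step of exponential-linear (EL) kinetics for a single
    # tissue compartment. Uptake is exponential (classic Haldanean);
    # elimination above the crossover tension is linear, which is the
    # Thalmann twist that slows offgassing and lengthens the schedule.
    k = math.log(2) / half_time_min
    if p_ambient >= p_tissue or p_tissue <= crossover:
        # Exponential branch: uptake, or elimination below the crossover
        return p_ambient + (p_tissue - p_ambient) * math.exp(-k * dt_min)
    # Linear branch: offgassing rate frozen at its value at the crossover
    rate = -k * (crossover - p_ambient)
    return max(p_ambient, p_tissue + rate * dt_min)

# Illustrative: a 20 min compartment offgassing at the surface on air
p = 2.5  # bar of inert gas tension loaded from a dive
for _ in range(30):  # 30 one-minute steps
    p = update_tissue(p, p_ambient=0.79, half_time_min=20.0,
                      dt_min=1.0, crossover=1.5)
print(round(p, 3))  # ~1.76 bar; pure exponential decay would give ~1.40

Run it and you can see why the computed decompression times get long: the linear branch holds the tissue tension well above what an exponential model would predict.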
 
I did, but you need to read further than the abstract. In the conclusions, point 7.2.1, the text reads:

7.2.1. Estimated DCS risks of Air-only schedules prescribed by the Thalmann Algorithm with either VVal-18 or VVal-18M increase with increasing bottom time in each dive depth group. Unacceptably high risks are attained with sufficiently long bottom times in most groups. The maximum risk attained in tabulated VVal-18 Air schedules is about 9%, substantially lower than highest risks incurred by USN56 schedules. The maximum risk attained in tabulated VVal-18M Air schedules is about 11%.

As I'm sure you know, NEDU defined an "acceptable" risk of DCS in their tests as less than 5%. Both 9% and 11% are higher than 5%.

I also said, in the post where I posted the link, that the report was more nuanced than I had been in my post before that. I may have been (probably was) a little selective in what I remembered from the first time I read this report.

The report seems to be saying that the algorithm is safer than the air tables, and that the DCS risk is generally very low, but that the decompression times calculated are very long. For example, in one of the tables (working from memory again here) there is a dive listed that shows a typical model (probably the air table) giving a run time of 120 minutes, while the Thalmann algorithm gives a run time for the same bottom profile of about 300 minutes. That's what they mean when they say long decompression times. The conclusion about the algorithm for long dives is that the DCS risk appears to come out higher than 5%.

That's what I get if I read the whole article.

R..

The probabilistic estimates of DCS for Thalmann turn out to be much higher than the experimental man-testing in the later NEDU deep stops study: 1.5% observed vs estimates of 4% and 6% (depending on the estimator). Since the estimates were mostly better than the old USN tables, it looks reasonably successful in those terms, if not so much in time to surface.

Meanwhile, can I point out to anyone who has not met a bent person in the wild that these algorithms, when tested, still bent people on more than one dive in a hundred. Now, these divers were doing proper work at depth, but they were also youngish, fit and trained.
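
To put numbers like 1.5% in perspective, the statistical uncertainty on an observed DCS rate from a trial of a couple of hundred dives is quite wide. A quick Wilson-interval sketch in Python; the 3 hits in 192 man-dives is my recollection of the shallow-stops arm of the NEDU study, so treat those inputs as illustrative:

import math

def wilson_interval(hits, n, z=1.96):
    # 95% Wilson score interval for a binomial proportion
    p = hits / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# Assumed for illustration: roughly 3 DCS hits in 192 man-dives
lo, hi = wilson_interval(3, 192)
print(f"observed {3/192:.1%}, 95% CI {lo:.1%} to {hi:.1%}")

On those numbers the interval runs from roughly 0.5% to 4.5%, so an observed 1.5% doesn't, on its own, rule out a true rate near the 4% estimate.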
 

Ken, can you link me a reference to the article? The only man-tested deep stop study I know about is the one that has caused so much discussion on the internet, where they tested BVM(3).

R..
 
... That seems much more secure than arguing over whether it is ok to pass GF lo if you can make it to the surface without passing GF hi.

The problem is that a model that's been rigorously validated to, e.g., provide the same or lower probability of DCS while reducing ascent time by 5% when decompressing on pure O2 is only good for you if that's the dive you're aiming for. If not, you may well be better off with the model that's commonly used on a wider variety of dives. And then you can pad it with a fudge gradient and wonder whether it makes any difference when you add the padding left-to-right vs right-to-left.
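
For reference, the "fudge gradient" is just a linear scaling of the Buhlmann M-values: GF lo applies at the deepest stop, GF hi at the surface, with straight-line interpolation in between. A minimal sketch (pressures in bar; the a/b coefficients and tissue tension are illustrative, not a complete implementation):

def gf_now(p_amb, p_first_stop, p_surface, gf_lo, gf_hi):
    # Linear interpolation of the gradient factor between GF lo at the
    # first (deepest) stop and GF hi at the surface
    if p_amb >= p_first_stop:
        return gf_lo
    frac = (p_first_stop - p_amb) / (p_first_stop - p_surface)
    return gf_lo + frac * (gf_hi - gf_lo)

def tolerated_ambient(p_tissue, a, b, gf):
    # Buhlmann ceiling scaled by a gradient factor (Baker's formulation);
    # gf = 1.0 recovers the raw Buhlmann value, (p_tissue - a) * b
    return (p_tissue - a * gf) / (gf / b - gf + 1.0)

# Illustrative: one compartment with a = 0.52, b = 0.88, tension 2.0 bar
for gf in (1.0, 0.85, 0.70):
    print(gf, round(tolerated_ambient(2.0, 0.52, 0.88, gf), 3))

A lower GF raises the tolerated ambient pressure, i.e. a deeper ceiling and more padding. Dropping GF hi pads the shallow end of the schedule; dropping GF lo pushes the first stop deeper, which I take to be the left-to-right vs right-to-left padding being joked about.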
 
Ken, can you link me a reference to the article? The only man-tested deep stop study I know about is the one that has caused so much discussion on the internet, where they tested BVM(3).

R..
The same study. Thalmann was the gas content model with the shallow profile; BVM(3) was the bubble model with the deep profile. In the earlier paper you linked to, it was BVM(3) that was used as one of the measures of probability of DCS. As it turned out, it overestimated that by nearly four times for the deep stop study's shallow profile. So maybe the apparently high pDCS was bogus before too; perhaps shorter deco would also have worked within the 2.3% pDCS target if actually tested on people.
 
