
I have it on good authority that physicists throw away almost every data point coming from the sensors of instruments like particle colliders.
LOL. And I throw away most of the "facts" I read on ScubaBoard. What's your point?
 
LOL. And I throw away most of the "facts" I read on ScubaBoard. What's your point?

Actually, it was to provide a concrete example/illustration of your "not how often it is written to memory and stored/logged". You're welcome.
 
As a caveat, I have no direct knowledge of how any other dive computer calculates; however, I expect most are similar to the approach we take. I may be able to offer some clarification.

Sampling frequency is how often you take pressure readings, gas mix, etc. and calculate the algorithm results. For most computers I expect that will be in the range of every second to at most every few seconds.

The processing required to calculate algorithms like Buhlmann or the “folded” RGBM (those calculations actually being Haldanian) is minimal. Processor speed is unlikely to be a limiting factor. When you get to fully iterative bubble models, like “full” RGBM or VPM, then the processing required can be more significant, at least for longer profiles with a lot of deco time. But I’d be surprised if there are computers now that don’t run algorithm calculations at essentially the same rate at which they sample data, i.e. every second, or two, or three. The actual sampling process could involve multiple readings that get compared as a check on sensor accuracy. They could be averaged, or outlier readings discarded- how the raw sensor data is handled will depend on the computer’s firmware. What I’m referring to is the value the computer takes as a depth sample after those checks are done, and that’s most likely happening every second or two. Definitely not 20 or 30 seconds. A diver could travel quite a distance in 20 seconds.
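To make that concrete, here is a minimal sketch (in C, and emphatically not any vendor's actual firmware) of what a once-per-second Haldanian tissue update amounts to; the 1 Hz rate and the ZH-L16-style half-times are assumptions for illustration:

#include <math.h>

#define N_COMPARTMENTS    16
#define SAMPLE_INTERVAL_S 1.0   /* assumed 1 Hz sample/update rate */

/* Illustrative nitrogen half-times in minutes (ZH-L16-style). */
static const double half_time_min[N_COMPARTMENTS] = {
    4.0, 8.0, 12.5, 18.5, 27.0, 38.3, 54.3, 77.0,
    109.0, 146.0, 187.0, 239.0, 305.0, 390.0, 498.0, 635.0
};

static double tissue_pn2[N_COMPARTMENTS];   /* inert gas tension, bar */

/* One Haldanian update: each compartment moves exponentially toward
 * the inspired inert gas pressure over one sample interval. */
void update_tissues(double inspired_pn2)
{
    for (int i = 0; i < N_COMPARTMENTS; i++) {
        double k = log(2.0) / (half_time_min[i] * 60.0);   /* 1/s */
        tissue_pn2[i] += (inspired_pn2 - tissue_pn2[i]) *
                         (1.0 - exp(-k * SAMPLE_INTERVAL_S));
    }
}

A loop like that is a few dozen floating point operations per second -- trivial for any modern micro, which is the point about processor speed above.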

But the same term, sampling frequency, is also often (confusingly) used to describe the rate at which dive data is written to the computer’s memory- what we call the data storage interval. CadiveOz has this right in post # 78. For many computers this is adjustable. This only has to do with the granularity of the saved data- and the amount of memory required to store a dive. It has nothing to do with algorithm calculations during a dive. Exactly how each computer handles this saved data may vary- for instance we save an average of the depth readings over the selected interval, but always store the last, lowest gas pressure reading with the segment.
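To illustrate the distinction, here is a hedged sketch of a logger running a 10-second storage interval on top of 1 Hz samples -- the interval, field names, and layout are invented, not how any particular computer stores its data:

#define STORAGE_INTERVAL_S 10          /* assumed; often user-adjustable */

struct log_segment {
    float avg_depth_m;          /* average of the 1 Hz depth samples */
    float tank_pressure_bar;    /* last (lowest) reading in the segment */
};

static float depth_sum;
static int   sample_count;

/* Called once per 1 Hz sample; emits a segment every STORAGE_INTERVAL_S. */
void log_sample(float depth_m, float tank_pressure_bar,
                void (*write_segment)(const struct log_segment *))
{
    depth_sum += depth_m;
    sample_count++;
    if (sample_count == STORAGE_INTERVAL_S) {
        struct log_segment seg = {
            .avg_depth_m       = depth_sum / sample_count,
            .tank_pressure_bar = tank_pressure_bar,
        };
        write_segment(&seg);        /* flash write happens here */
        depth_sum    = 0.0f;
        sample_count = 0;
    }
}

Note that the tissue calculations never see this code path; it only affects what ends up in the downloadable log.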

There are a lot of assumptions that need to be made when any algorithm is implemented in a dive computer- ascent rate, for instance. There can also be differences in how various processors handle the calculations with rounding or floating point operations. All of these can mean variations in the results of even supposedly identical algorithms.
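One way to see the rounding point: run the same exponential tissue update in single vs. double precision and the results drift apart over a long exposure. The scenario below (constant 4 bar inspired pressure, 4-minute half-time, 30 minutes at 1 Hz) is purely illustrative:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double k = log(2.0) / 240.0;   /* 4 min half-time, in 1/s */
    float  pf = 0.79f;                   /* single precision tissue */
    double pd = 0.79;                    /* double precision tissue */

    for (int t = 0; t < 30 * 60; t++) {  /* 30 min of 1 Hz updates */
        pf += (4.0f - pf) * (float)(1.0 - exp(-k));
        pd += (4.0  - pd) * (1.0 - exp(-k));
    }
    printf("float: %.9f  double: %.9f  diff: %g\n", pf, pd, pf - pd);
    return 0;
}

The difference is tiny, but it is exactly the kind of thing that keeps two "identical" algorithms from producing bit-identical results.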

-Ron
 
@RonR ,
Thanks for your viewpoint and clarification. I like your definitions and will use them going forward. Sampling frequency/rate will refer to how often sensor readings are taken and the algorithm run, while data storage interval refers to the rate at which dive data is written to the computer’s memory. I'll personally drop the recalculation frequency term to avoid any confusion. Thanks again. :)
 
The processing required to calculate algorithms like Buhlmann or the “folded” RGBM (those calculations actually being Haldanian) is minimal. Processor speed is unlikely to be a limiting factor. When you get to fully iterative bubble models, like “full” RGBM or VPM, then the processing required can be more significant, at least for longer profiles with a lot of deco time. But I’d be surprised if there are computers now that don’t run algorithm calculations at essentially the same rate at which they sample data, i.e. every second, or two, or three.

If your microprocessor has IDLE/HLT/WFI you can save a lot of battery power not recalculating needlessly. If it doesn't then it's moot, and I've no idea if any processors used in real dive computers do.
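For what it's worth, the usual embedded pattern (assuming a Cortex-M class part with a 1 Hz timer interrupt -- I don't know what any real dive computer uses either) looks like this, and the core sleeps for the vast majority of every second:

#include <stdbool.h>

static volatile bool tick;              /* set by a 1 Hz timer ISR */

void timer_isr(void) { tick = true; }

void main_loop(void)
{
    for (;;) {
        __asm__ volatile ("wfi");       /* sleep until next interrupt */
        if (tick) {
            tick = false;
            /* sample_sensors(); update_tissues(); refresh_display();
             * -- hypothetical hooks, named only for illustration */
        }
    }
}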

Underlying gas dynamics is ass-u-me-d to be log(2) loading. 1st half-time tissue gets to 50% of ambient pressure, 2nd half-time adds 25%, and so on to 6 half-times where the tissue is "practically saturated".
[Image: tissue half-time loading curve]


The part most affected by short time intervals is in the bottom-left corner.

The fastest tissue compartment is most sensitive to short time quanta. Take the Cressi/ZH-L12 fastest TC, with a half-time of 2.5 minutes, or 150 seconds. Recalculating every second means that in the bottom-left corner you're tracking the loading down to 1/150th of the 50% -- progressively less as your tissue takes on more gas. That only really makes sense if your CPU consumes as much power in an IDLE loop as it does calculating, because the amount of gas taken on during that interval will make no appreciable difference to your deco/NDL calculation.

Every 20 seconds would give you around 1/8th of 50% in the bottom left corner, which may be worth tracking, even up to T3.
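Putting rough numbers on that (same assumed 150 s half-time): the fraction of the remaining gradient a compartment picks up per step is 1 - 2^(-dt/150), i.e. about 0.46% per 1 s step versus about 8.8% per 20 s step:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double half_time_s = 150.0;   /* fastest TC, 2.5 min */
    const double steps[] = { 1.0, 20.0 };

    for (int i = 0; i < 2; i++) {
        double frac = 1.0 - pow(2.0, -steps[i] / half_time_s);
        printf("dt = %4.0f s -> %.2f%% of remaining gradient per step\n",
               steps[i], 100.0 * frac);
    }
    return 0;
}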

(I do have too many short idle time intervals on my hands this week. :wink: )
 
If you take multiple samples in one update, you can tell whether you're going up or down within the update period. You can also average 20 samples and use that for the update every 20 seconds. Doing that would dampen out the otherwise larger variations from moving your arm and falling out of a stop depth band. A 30 ±1 ft stop with a tide or wave surge of 2 ft would average out to a more constant average depth and not generate an incomplete stop.
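A sketch of that damping idea (window length and units assumed): keep a ring buffer of the last 20 one-second depth samples and compare the mean, not the raw reading, against the stop band, so a ±2 ft surge largely cancels out:

#define WINDOW 20                       /* assumed 20 x 1 Hz samples */

static float ring[WINDOW];
static int   idx;
static int   filled;

/* Push a 1 Hz depth sample and return the smoothed depth. */
float smoothed_depth(float depth_ft)
{
    ring[idx] = depth_ft;
    idx = (idx + 1) % WINDOW;
    if (filled < WINDOW)
        filled++;

    float sum = 0.0f;
    for (int i = 0; i < filled; i++)
        sum += ring[i];
    return sum / filled;
}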
 
If your microprocessor has IDLE/HLT/WFI you can save a lot of battery power not recalculating needlessly. If it doesn't then it's moot, and I've no idea if any processors used in real dive computers do.
...<snip>
Algorithms aren’t monolithic in their implementation. There are choices about how to run different aspects, and when and how often to display information. You definitely want to use the processor efficiently to save battery power- that is in fact a huge issue. I think it’s safe to assume that all micros used in dive computers have multiple options for low power modes they can be put into. But dive computers are real time systems, and you really need to keep the tissues updated in real time along with depth. However that’s only part of the algorithm. This does not mean you are recalculating schedules or no-stop times that frequently. I don’t think I was being clear about that.

As a for instance, whenever a deco schedule has been generated, rather than constantly recalculating, we monitor the adherence of the diver to the schedule. If the schedule is followed precisely that part of the algorithm would not be recalculated. On the other hand, if they don’t appear to be ascending to a 1st stop, that would trigger a recalculation and a new schedule at some point. When that happens depends on the tissue states. Ditto if a stop is overstayed or they re-descend after starting a profile. This is essentially another algorithm (or algorithms) that determines how the basic decompression algorithm is implemented in real time. What exactly triggers recalculation under what circumstances is a big part of dive computer design. Obviously there is room for a lot of variation in how these details are handled by different computers, even those running the same algorithm. Those differences can give different results. With the same algorithm and settings you would expect the variations to be minor. But, particularly when looking at no-stop times on shallower dives, minor variations in implementation can result in many minutes of difference in no-stop times. That can look like a big difference, even if the underlying tissue calculations are very close. That’s because computers treat deco vs. no-stop as a binary switch, when in reality it is a long, fuzzy slope.
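In code terms that separation might look something like the sketch below -- tissues updated every tick, schedule regenerated only when an adherence check trips. The names and the trigger itself are hypothetical, not any vendor's actual logic:

struct schedule;                              /* opaque deco schedule */

/* Hypothetical helpers, named only for illustration. */
extern void update_tissues_1hz(void);
extern int  diver_deviating(const struct schedule *s);  /* adherence check */
extern struct schedule *recalculate_schedule(void);

void tick_1hz(struct schedule **sched)
{
    update_tissues_1hz();                     /* always runs, every second */
    if (*sched && diver_deviating(*sched))
        *sched = recalculate_schedule();      /* only on a trigger */
}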
 
If you take multiple samples in one update, you can tell whether you're going up or down within the update period. You can also average 20 samples and use that for the update every 20 seconds. Doing that would dampen out the otherwise larger variations from moving your arm and falling out of a stop depth band. A 30 ±1 ft stop with a tide or wave surge of 2 ft would average out to a more constant average depth and not generate an incomplete stop.

You are bringing up an important point. There are many considerations like this that have to be taken into account in a dive computer. Our perspective is that it’s best to measure and track depth and calculate tissue states as accurately as possible, in real time. But it is necessary to buffer and interpret what gets displayed through the interface to the diver- that’s 90% of what goes into designing a dive computer. The algorithm is trivial in comparison.

Determining the difference between just raising your arm and actually violating a stop is something all computers have to do. Most do it by not logging a problem until the violation has persisted for some specific time, or exceeds some arbitrary value. In the Cobalt we do this sort of thing with an algorithm that looks at the stop depth, the distance above the stop as a % of the distance to the surface, the stop time remaining, and the time spent above the stop depth. The results determine if you get a deco violation, a “go down” warning, or nothing. The same kind of issue arises in determining when a dive starts- determining the difference between an actual descent and just reaching down to adjust a fin strap is something we use another algorithm for. Most computers use an arbitrary depth trigger. But all computers must deal with these kinds of issues, and cumulatively they can make a difference in the results from even supposedly identical algorithms.
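As a toy version of that kind of heuristic -- every threshold below is invented for illustration, and certainly not the Cobalt's actual values:

enum stop_status { STOP_OK, GO_DOWN_WARNING, DECO_VIOLATION };

/* Weigh how far above the stop the diver is (as a fraction of the
 * distance from the stop to the surface) against how long they have
 * been there, so a brief arm-raise is ignored. */
enum stop_status check_stop(float depth_ft, float stop_depth_ft,
                            float secs_above_stop)
{
    if (depth_ft >= stop_depth_ft)
        return STOP_OK;

    float excursion = (stop_depth_ft - depth_ft) / stop_depth_ft;

    if (excursion > 0.5f && secs_above_stop > 60.0f)
        return DECO_VIOLATION;      /* large, sustained excursion */
    if (secs_above_stop > 10.0f)
        return GO_DOWN_WARNING;     /* minor or brief -> just warn */
    return STOP_OK;
}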

-Ron
 
...<snip>
Underlying gas dynamics is ass-u-me-d to be log(2) loading. 1st half-time tissue gets to 50% of ambient pressure, 2nd half-time adds 25%, and so on to 6 half-times where the tissue is "practically saturated".
[Image: tissue half-time loading curve]
...<snip>


This chart is not isolated to just off-gassing; it is a common chart for, e.g., radioactive material decay, or electronic reactive component charge and discharge rates. It is a curve that, if you understand its function and its application to deco, is the basis that explains nearly all of the process.
 
Computer people often manage to make the curve turn down on itself. For instance, when you add more CPUs/cores, you get logarithmic performance growth, but you also have to synchronize memory accesses, move the processes around (or "pin to core"), etc., and at some point that overhead starts eating away at your performance gains.

"Adding more people to project tha's late only makes it later" is one of the oldest rules in the software engineering rulebook -- the curve turns back toward zero.
 
