Interesting development for underwater imagery / photography


I don't see what all the fancy math and calculations provide that we can't already do with just one image -- and without needing a static image that we have to look at from various distances.

I'm kinda out of my wheelhouse on the marine side of it here, but I think the paper mentions this:

Previously, it was assumed that β^D_c = β^B_c, and that these coefficients had a single value for a given scene [9], but in [1] we have shown that they are distinct, and furthermore, that they had dependencies on different factors.

And as DoctorMike said, it's for processing lots of pictures on a computer for research, so multiple images shouldn't be a problem.

I'm still trying to figure out if those 1,100 images with the color palette are just for checking accuracy or if they're training a model on them. The Hacker News thread "Sea-Thru: A Method for Removing Water from Underwater Images" has some discussion by people more attuned to the AI/vision aspects of this.
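
For reference, the revised image formation model those two coefficients come from looks roughly like this (my transcription from memory of the paper's notation, so treat the exact symbols as approximate):

```latex
% I_c: captured image in color channel c;  J_c: the scene as it would look with no water
% z: distance (range) from the camera to the object at that pixel
% B^\infty_c: veiling light (backscatter at infinite distance)
I_c = J_c \, e^{-\beta^D_c z} + B^\infty_c \left( 1 - e^{-\beta^B_c z} \right)
```

The point of the quoted sentence is that the β^D_c governing how the direct signal fades and the β^B_c governing how backscatter builds up are not the same number, and they don't depend on the same things.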
 
Your quote refers to the forward problem... trying to use inherent optical properties to clear up an image.
When we white balance, we are doing the inverse problem: we already know the answer (the point we are clicking on is white, or 18% gray), and we are changing the image to use that information.

I'm sure lots of data crunching and clever AI can do as well as a good photographer white-balancing... :) and using only one image.
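
Just to make the "inverse problem" concrete, this is roughly all a one-click white balance does; a minimal sketch in Python, where the function name, the 18% target, and the clipping are my own choices rather than anything from the paper:

```python
import numpy as np

def gray_point_white_balance(img, x, y, target=0.18):
    """Scale each channel so the clicked pixel comes out neutral gray.

    img: linear RGB image as floats in [0, 1], shape (H, W, 3).
    (x, y): the pixel the photographer knows is white or an 18% gray card.
    """
    observed = img[y, x, :]                       # RGB actually recorded at the known-neutral point
    gains = target / np.maximum(observed, 1e-6)   # one scale factor per channel
    return np.clip(img * gains, 0.0, 1.0)         # apply the same gains to every pixel
```

Three numbers for the whole frame. The argument in the paper is that those three numbers can't be right everywhere underwater, because the color shift at each pixel depends on how far away that pixel is.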
 
She's doing something much more in-depth than what you can do in Photoshop. She uses a video technique to get distance information for every pixel in the image, then corrects every pixel for the lost color information. You could do something similar with a 3D camera setup. She can't use a strobe for this because it would add color information on top of what she is already adding via the algorithm.
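
A crude sketch of that idea, assuming you already have a range map (distance per pixel) and some per-channel coefficients; in the real method the coefficients are estimated from the scene itself, so everything below is placeholder values and my own naming, not her code:

```python
import numpy as np

def correct_with_range_map(img, z, beta_d, beta_b, b_inf):
    """Undo backscatter and attenuation per pixel using a distance map.

    img:    observed linear RGB, floats in [0, 1], shape (H, W, 3).
    z:      range map in meters, shape (H, W), e.g. from multi-view reconstruction.
    beta_d: per-channel attenuation of the direct signal, shape (3,).
    beta_b: per-channel backscatter coefficient, shape (3,).
    b_inf:  veiling-light color (backscatter at infinite distance), shape (3,).
    """
    z = z[..., None]                                   # broadcast the range over the 3 channels
    backscatter = b_inf * (1.0 - np.exp(-beta_b * z))  # haze added by the water column
    direct = img - backscatter                         # remove the haze first
    restored = direct * np.exp(beta_d * z)             # then undo distance-dependent color loss
    return np.clip(restored, 0.0, 1.0)
```

This is also why the strobe stays off: the correction assumes the only light on the subject is ambient light that traveled through z meters of water, and a strobe would add light the model isn't accounting for.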
 
Very cool. However, it seems that for it to work, you still need lighting. In her photos, she has natural lighting. If you are deep enough, where will that come from? You still need a light source, even if only to focus and take a pic.
 
You just need a Titanic-size camera and an exposure time of an hour or five.
 
Disregard the thread I started. I missed that there was already one on this exact topic.

To the Mods: if possible, please delete this thread, since there is nothing to be gained by it.

Thanks,
Hoag
 
Didn't delete it, just merged the two threads so all the comments are in one place.
 
There is an iOS app called Dive+ that does a great job of color correcting my underwater photos in my iPad photo library, but I've not figured out how to use it to correct my GoPro videos.
Except it doesn't work on an iPad any more. Apparently the developers are aware of this, and their suggested fix is to switch to an iPhone. Why in the world would I switch to a smaller screen? I'm looking for something else to replace Dive+.
 