aricooperdavis
Moderator
I stumbled across an interesting piece of work this morning - a deep learning approach to "enhancing" photographs. This might just sound like an Instagram filter, but there's quite a bit more to it than that!
This technique uses identical photographs taken on different cameras to teach a machine learner the kinds of mistakes that low-res phone cameras make. After a lot of training, that machine learner can start with a photo taken on a low-res phone and spit out its best guess at what the photo would look like if it had been taken on a DSLR. This includes lighting and colour balancing adjustments, but goes as far as de-blurring and de-noising.
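To give a feel for the "learn a mapping from paired examples" idea, here's a drastically simplified sketch. The real system trains a deep convolutional network; this toy version just fits an affine colour transform by least squares from hypothetical paired pixels (the data here is made up for illustration), but the principle - use the same scene shot on two cameras as input/target pairs - is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired training data: N pixels from a phone photo and the
# same N pixels from a DSLR photo of the identical scene (RGB in 0..1).
phone = rng.random((1000, 3))

# Pretend the DSLR renders the scene warmer via a fixed colour matrix
# plus a small offset - this stands in for the "ground truth" photos.
true_matrix = np.array([[1.1, 0.05, 0.0],
                        [0.0, 1.2,  0.05],
                        [0.0, 0.0,  0.9]])
dslr = phone @ true_matrix.T + 0.02

# "Training": fit dslr ≈ phone @ M + b by least squares. Appending a
# column of ones lets the offset b be learned alongside the matrix.
X = np.hstack([phone, np.ones((len(phone), 1))])
coeffs, *_ = np.linalg.lstsq(X, dslr, rcond=None)

# "Inference": apply the learned transform to a pixel from a new phone
# photo the model has never seen.
new_pixel = np.array([0.5, 0.4, 0.3])
enhanced = np.append(new_pixel, 1.0) @ coeffs
```

A deep network replaces the single global colour matrix with millions of parameters, which is what lets it also learn spatial effects like de-noising and de-blurring rather than just colour shifts.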
My first thought when I saw this was to try it on some cave photos. The machine learner obviously hasn't been trained in caves, so it's likely to make some colour mistakes (most of its training data is from sunlit photos which are likely to be far warmer than underground shots) amongst other things, but it gave it a good shot.
This is a photo that I happened to have on my phone of a little bit of phreatic development between pitches in a cave being pushed on the Dachstein expedition. It was taken on my Huawei P10 Lite, which has an alright camera but nothing special, making it a good test subject.
You can't see much improvement in detail, but it's done a good job at warming the colours up and removing some noise. It's also lifted some detail out of the shadows, although I quite liked the contrast that these shadows produced in the original. It's also stuck a weird red stripe at the top, which I can only assume is a bit of a bug.
I would be interested to see the applications of this kind of technology to help combat some of the issues often faced underground with lighting, colour, and noise. I don't think anyone will ever get the appropriate training data (as it would be a complete nightmare), but I'm certainly going to play with sticking some of my crappy cave photos through it to see how they turn out.
You can try your own photos online or have a bit more of a play with what's going on under the hood by running it yourself.