TheBitterEnd said:
Interesting stuff, but the facebook post is a bit light on detail - I'd like to know which software he is using. I've had a go with similar optical flow and structure-from-motion techniques in caves in the past, but found that the software got confused by moving light sources. I guess things have moved on, and he has some pretty good lighting.
Would be interesting to see which software this person used, yes.
I had a go at doing something like this a couple of weekends ago at the Alderley Edge open weekend, using VisualSFM to create a point cloud from a GoPro video (or rather, from frames extracted from a GoPro video).
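In case anyone wants to try the same thing, the first decision is how densely to pull frames out of the video. A rough sketch of that arithmetic (plain Python; the function name and the example numbers are my own, not anything from VisualSFM):

```python
# Sketch: decide which frame numbers to extract from a video clip
# for use as SfM input. The numbers below are made-up examples.

def frames_to_extract(total_frames, fps, seconds_between_frames):
    """Return the frame indices to pull out, one frame every
    `seconds_between_frames` seconds of video."""
    step = max(1, int(round(fps * seconds_between_frames)))
    return list(range(0, total_frames, step))

# e.g. a 10-second GoPro clip at 30 fps, one frame every 2 seconds:
print(frames_to_extract(300, 30, 2))  # [0, 60, 120, 180, 240]
```

The trade-off mentioned below falls straight out of this: a smaller spacing gives more overlap (more matched points) but many more images for the matcher to chew through.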
At first look it worked well where we had part of the mine lit up, and it did pretty well in some other bits of the mine where I was relying on my own light sources - even though the lighting was changing. The difficulty I found was that in the bits where I was providing the lighting, the GoPro stills showed some motion blur - presumably because it was still quite dark. That meant some of the features the software was trying to match weren't very distinct. It did get a bit confused at some points because of this, and started creating a couple of separate models not linked by any common points.
I was using the mode that looks for common features in each photo against all other photos - and that starts to take a very long time once you've got a few hundred frames extracted. But if you leave too big a time gap between the extracted frames, you start getting lower numbers of matched points. I understand it's possible to look only at the n photos either side of the current one, which should make the run times more sensible - but I'm not sure how that would deal with a situation where the video comes back to the same point, so you'd want to match the features at the start and end of the video.
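One way round the loop problem might be to build the list of image pairs yourself: sequential pairs within a window of n frames, plus extra pairs tying the first few frames to the last few. A sketch of generating such a list (my own naming and defaults - if I remember rightly VisualSFM can be fed a text file of image pairs to match, but check that against its docs):

```python
# Sketch: build a custom list of image pairs for feature matching -
# sequential matching within a window, plus extra pairs linking the
# start and end of the sequence in case the camera path loops back.
# Function name, window size and overlap are illustrative choices.

def match_pairs(n_frames, window=5, loop_overlap=3):
    """Return sorted (i, j) pairs with i < j: each frame matched
    against the next `window` frames, plus the first and last
    `loop_overlap` frames matched against each other."""
    pairs = set()
    # Sequential window: frame i against frames i+1 .. i+window.
    for i in range(n_frames):
        for j in range(i + 1, min(i + 1 + window, n_frames)):
            pairs.add((i, j))
    # Loop closure: start of the video against the end of it.
    for i in range(loop_overlap):
        for j in range(n_frames - loop_overlap, n_frames):
            if i < j:
                pairs.add((i, j))
    return sorted(pairs)

# 200 frames with a window of 5 gives far fewer pairs than the
# all-against-all 200*199/2 = 19900 comparisons.
print(len(match_pairs(200)))
```

The quadratic blow-up in the all-against-all mode is exactly why the run times get silly after a few hundred frames; the windowed list grows only linearly, and the handful of start-to-end pairs is what lets the solver join the two ends of the traverse into one model.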