Breaking up the image and comparing the pieces works well at a qualitative level. The actual results, in terms of MSE over the entire images, are harder to discern.
| Images compared | Layers: iterations | MSE |
|---|---|---|
| v0001.remap.pgm v0002.remap.pgm | 4: 8 4 2 1 | 81 |
| v0001.remap.pgm v0002.nanmask.pgm | 1: 1 | 60 |
| v0001.remap.pgm v0002.nanmask.pgm | 4: 8 4 2 1 | 1500 |
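The piecewise comparison mentioned above can be sketched as follows, assuming 8-bit grayscale images held as NumPy arrays; the block size and function name are illustrative, not the actual videoOrbits code:

```python
import numpy as np

def blockwise_mse(a, b, block=16):
    """Compare two equal-sized grayscale images block by block.

    Returns a 2-D array of per-block mean squared errors, so that
    regions of high disagreement can be located and inspected.
    """
    h, w = a.shape
    h, w = h - h % block, w - w % block  # drop ragged edges
    d = (a[:h, :w].astype(np.float64) - b[:h, :w].astype(np.float64)) ** 2
    # Average the squared differences within each block.
    return d.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

# Two synthetic 32x32 images differing only in the top-left block.
x = np.zeros((32, 32), dtype=np.uint8)
y = x.copy()
y[:16, :16] = 10
mse = blockwise_mse(x, y, block=16)
```

The per-block map makes the qualitative agreement visible even when the single whole-image MSE figure is ambiguous.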
The images are remapped so that values of 255 become 254, because 255 is reserved as the NaN-mask value. The steps are given as the number of downsampled layers followed by the number of iterations at each layer (starting with the smallest image and proceeding to the largest). Normal videoOrbits performs quite well on these two images. Interestingly, when multiple layers were used to approximate P on the NaN-masked image, the MSE was very large, as the algorithm wandered off in unknown directions; when the mask was applied only once, at the regular image size, it worked quite well.
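A minimal sketch of the remapping and sentinel-masking steps, again assuming NumPy uint8 arrays; the helper names are illustrative:

```python
import numpy as np

NAN_VALUE = 255  # reserved sentinel: no valid pixel may hold it

def remap(img):
    """Clamp 255 down to 254 so 255 can serve as the NaN-mask value."""
    out = img.copy()
    out[out == 255] = 254
    return out

def apply_nanmask(img, mask):
    """Write the sentinel into pixels flagged by a boolean mask."""
    out = img.copy()
    out[mask] = NAN_VALUE
    return out

img = np.array([[255, 10], [200, 255]], dtype=np.uint8)
r = remap(img)  # after remapping, no pixel remains at 255
masked = apply_nanmask(r, np.array([[True, False], [False, False]]))
```

Because the remap happens first, any pixel later found at 255 is unambiguously a masked (NaN) pixel rather than a genuinely saturated one.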
| Images compared | Layers: iterations | MSE |
|---|---|---|
| v0100.remap.pgm v0101.remap.pgm | 4: 8 4 2 1 | 700 |
| v0100.remap.pgm v0101.nanmask.pgm | 1: 1 | 570 |
| v0100.remap.pgm v0101.nanmask.pgm | 4: 8 4 2 1 | 1100 |
On a different set of images the result is much the same. One benefit of NaN masking beforehand is that the estimation itself is shortened, though this comes at the expense of computing the NaN mask. The time to estimate on the downsampled images is also not very long.
I also know there is a limitation in the method I have used: it is not feasible to expect the parameters P to be estimated well from such small images. To remedy this I would implement the above but use robust estimation on the entire image. I have thus demonstrated that masking out regions of high MSE does help the calculation of the projective parameters P. However, further work is needed on how to incorporate this into the current videoOrbits algorithm so that the two work together efficiently.
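One common form of robust estimation (not necessarily the exact variant intended above) is iteratively reweighted least squares with Huber weights, which softly downweights high-residual samples instead of hard-masking them. A generic linear-model sketch, with illustrative names:

```python
import numpy as np

def irls_huber(A, y, k=1.345, iters=20):
    """Iteratively reweighted least squares with Huber weights.

    High-residual samples get weight k/|r| instead of 1, so outlying
    regions contribute little to the parameter fit.
    """
    p = np.linalg.lstsq(A, y, rcond=None)[0]  # ordinary LS start
    for _ in range(iters):
        r = y - A @ p
        s = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= k, 1.0, k / u)           # Huber weights
        sw = np.sqrt(w)
        p = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)[0]
    return p

# Line y = 2x + 1 with one gross outlier; the robust fit resists it.
x = np.arange(10.0)
y = 2 * x + 1
y[7] += 50  # outlier
A = np.c_[x, np.ones_like(x)]
p = irls_huber(A, y)
```

For the projective parameters P, the same reweighting idea would apply to the per-pixel (or per-block) residuals of the registration error rather than to a simple line fit.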