Hi all,
indeed we have a beta version of a code dedicated to cloud-to-cloud comparison that solves many of the problems discussed in the forum (and in general) when trying to compare point clouds without knowing the normal and without doing any post-treatment on the data (meshing or gridding, which can be a nightmare for the kind of surfaces I'm working on: cliffs, vegetated surfaces, complex 3D river beds...). The software also caters for the precision I'm aiming at (i.e. a sub-cm level of detection) and factors in the local roughness of the surface to determine whether a change is statistically significant or not. I'm currently writing the paper presenting the method developed with a colleague (Nicolas Brodu), and I'll present it at the European Geosciences Union meeting in Vienna in April.
In a nutshell:
(i) normal computation: we locally model the cloud by a plane and compute the normal to this plane (nothing new there). The subtlety is that this is done over a range of spatial scales around the considered point, and the most planar one is chosen as the best representative scale (see the first sketch below this list). You can obviously fix this scale for the whole calculation. In our application this is a critical aspect, because we're looking at surfaces with significant roughness at small scales (say a few cm to 50 cm): only at scales larger than 1 to 5 m do you get a normal orientation that is meaningful for our specific application. This is also very interesting for small-scale measurements, where you can set the scale slightly larger than the typical instrument noise on a flat surface and avoid normal orientation "flickering" due to scanner noise.
(ii) orienting the normal: we impose a series of reference points towards which the normals are oriented. Although this seems time consuming, it is actually easy to do. There are much more complex ways to compute and orient the normal, but for our application this is good enough.
(iii) surface difference: we introduce a second scale here, generally smaller than the scale at which the normal is computed (but it can be the same). Along the normal, we compute the average distance between the 2 point clouds at the considered scale on each cloud (see the second sketch below this list). We also record the local roughness (and the number of points) in the 2 clouds, which is an important parameter to know whether a measured change is statistically significant or not. If you're comparing planar surfaces, this approach significantly reduces the uncertainty related to instrument noise (which is normally distributed when the incidence angle is close to 90°). Note also that because we compute the difference along the normal (and not by looking at the closest point), no result is generated when there is no intercept with the 2nd cloud: this way you don't have to "cut" the clouds so that they occupy the same space. You can also have holes in one of the point clouds (visibility issues, vegetation...) that won't pollute the results with artificial distance measurements. This partially solves the visibility issue. There are other advantages to a multi-scale approach for the normal and distance measurements, but this would be too long to explain here...
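To make points (i) and (ii) more concrete, here is a minimal numpy sketch of the general idea: fit a plane by PCA at several scales, keep the scale at which the neighbourhood is most planar, and flip the normal towards a reference point. The planarity measure, the minimum-neighbour threshold and the function names are my own assumptions for illustration, not the actual implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_scale_normal(cloud, point, scales, tree=None):
    """Fit a plane (PCA) to the neighbourhood of `point` at several scales
    (neighbourhood diameters) and keep the scale at which it is most planar.
    Sketch only: planarity measure and thresholds are assumptions."""
    tree = tree if tree is not None else cKDTree(cloud)
    best_scale, best_planarity, best_normal = None, -np.inf, None
    for d in scales:
        idx = tree.query_ball_point(point, d / 2.0)           # radius = half the scale
        if len(idx) < 10:                                      # too few points for a plane fit
            continue
        evals, evecs = np.linalg.eigh(np.cov(cloud[idx].T))    # eigenvalues in ascending order
        planarity = 1.0 - evals[0] / evals.sum()               # high when little variance is off-plane
        if planarity > best_planarity:
            best_scale, best_planarity = d, planarity
            best_normal = evecs[:, 0]                          # eigenvector of the smallest eigenvalue
    return best_scale, best_normal

def orient_normal(normal, point, reference_point):
    """Flip the normal so it points towards a user-supplied reference point."""
    return -normal if np.dot(reference_point - point, normal) < 0 else normal
```

In practice you would of course build the kd-tree once and loop this over the core points only.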
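And a similarly hedged sketch for point (iii): project the points of each cloud falling inside a cylinder of diameter d around the normal onto that normal, take the difference of the mean projections as the signed change, and use the standard deviations as the local roughness. The significance check below (roughness-based standard errors plus a registration error term) is just a plausible stand-in, not the exact criterion of the paper:

```python
import numpy as np

def signed_difference(core, normal, cloud1, cloud2, d, reg_err=0.0, max_depth=5.0):
    """Signed cloud-to-cloud difference at one core point, measured along the
    local (unit) normal. Sketch under stated assumptions, not the released code."""
    def cylinder_stats(cloud):
        rel = cloud - core
        along = rel @ normal                                    # position along the normal
        radial = np.linalg.norm(rel - np.outer(along, normal), axis=1)
        along = along[(radial <= d / 2.0) & (np.abs(along) <= max_depth)]
        if along.size == 0:
            return None                                         # no intercept -> no result
        rough = along.std(ddof=1) if along.size > 1 else 0.0    # local roughness
        return along.mean(), rough, along.size

    s1, s2 = cylinder_stats(cloud1), cylinder_stats(cloud2)
    if s1 is None or s2 is None:
        return None                     # hole or visibility gap: simply skip this core point
    (m1, r1, n1), (m2, r2, n2) = s1, s2
    dist = m2 - m1                      # signed change along the normal
    # crude significance check: the change must exceed the combined standard errors
    # of the two local means plus the registration error (assumption, see above)
    lod = 1.96 * np.sqrt(r1 ** 2 / n1 + r2 ** 2 / n2) + reg_err
    return dist, r1, r2, n1, n2, abs(dist) > lod
```

Scanning the whole cloud for every core point is obviously slow; in practice you would restrict the search with a spatial index, but it keeps the sketch short.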
Note that we also use here the notion of core points introduced in our recent paper on point cloud classification:
http://www.sciencedirect.com/science/ar ... 1612000330 (you'll have to read it to know what this means ;-) )
As with our software on point cloud classification, this software will be released as free software once the paper is close to acceptance and the code has been further bug-proofed and optimized (you can easily imagine that computing the normal at large scales takes a long, long time...). Note that I'm still a big fan of the C2C function of CloudCompare, which works well within the scope designed by Daniel (unsigned change detection on densely sampled surfaces with small-scale planarity). It's extremely fast for quickly checking the data, but I absolutely need signed differences as well as a proper treatment of local roughness for the level of change detection I'm after. And for data visualization, I've yet to find something better than CC.
Attached is an example for a 500 m reach of a meandering river (the bed is under water), showing the vegetation classification (in green) and the 3D point cloud comparison. The normal is computed at 10 m and the surface difference at 20 cm. Core points (where we actually compute the distance) are every 20 cm, while the raw data is down to 1 cm point spacing (and obviously varying across the scene). I'm sure you'll have recognized your favorite visualization software... I've chosen a scale from -1 to 1 m, but we actually have a registration error between the 2 clouds of about 2.5 mm and can measure really fine surface change.