M3C2 - Point cloud comparison

Hello!
I'm using CC to process some ALS data. I've been going step by step and here is how far I've got...
My data:
ALS with a point spacing of around 0.25 m and a density of 20-25 points/m², over steep terrain partially covered with tall vegetation.
My goal:
The ground classification done by the vendor is messed up, so my goal is to generate a new one by testing different algorithms, varying their input parameters, and generating ground point clouds.
1) Merged the available tiles, removed XYZ-identical points (mainly from the tile borders), and applied an SOR filter (10 points, nSigma of 3), resulting in approximately 3,300,000 points. (A conceptual sketch of SOR is below.)
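For anyone reproducing that step outside CC, here is a minimal sketch of the SOR idea, assuming SciPy and an (N, 3) NumPy array; this is the textbook statistical-outlier-removal recipe, not CloudCompare's exact code:

```python
from scipy.spatial import cKDTree

def sor_filter(xyz, k=10, n_sigma=3.0):
    # Distances to the k nearest neighbors (column 0 is the point itself).
    dists, _ = cKDTree(xyz).query(xyz, k=k + 1)
    mean_dist = dists[:, 1:].mean(axis=1)
    # Reject points whose mean neighbor distance exceeds the global
    # mean by more than n_sigma standard deviations.
    keep = mean_dist < mean_dist.mean() + n_sigma * mean_dist.std()
    return xyz[keep]
```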
2) Used the Rasterize tool on a 5 x 5 m grid to keep the lowest point near the center of each cell (smaller grids returned too many points from vegetation), with all statistics. This was my crude approach to getting ground points (the core idea is sketched below).
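Conceptually, that step boils down to keeping the minimum-height point per grid cell; a sketch assuming NumPy, ignoring the "near the center of the cell" refinement:

```python
import numpy as np

def lowest_point_per_cell(xyz, cell=5.0):
    # Integer 2D cell index for every point.
    ij = np.floor(xyz[:, :2] / cell).astype(np.int64)
    # Sort by cell first, then by height, so the first point
    # encountered in each cell is its lowest one.
    order = np.lexsort((xyz[:, 2], ij[:, 1], ij[:, 0]))
    ij_sorted = ij[order]
    first_in_cell = np.ones(len(xyz), dtype=bool)
    first_in_cell[1:] = np.any(ij_sorted[1:] != ij_sorted[:-1], axis=1)
    return xyz[order][first_in_cell]
```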
3) This point cloud became my reference cloud. I also generated a mesh with the PoissonRecon plugin (octree depth: 10, point weight: 0, samples per node: 1, boundary: Neumann) and filtered it by the density scalar field to keep it close to the reference point cloud. (A sketch of the same idea with Open3D is below.)
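The same Poisson-plus-density-filter idea can be reproduced with Open3D's screened Poisson bindings; a hedged sketch (the file name is hypothetical, and Open3D's parameters do not map one-to-one onto the qPoissonRecon dialog):

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("ground_seeds.ply")  # hypothetical input file
pcd.estimate_normals()
pcd.orient_normals_to_align_with_direction(np.array([0.0, 0.0, 1.0]))  # ALS: up
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=10)
# Filter by density, as with the density scalar field in CloudCompare:
# drop the most poorly supported vertices (here, the lowest 5 %).
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.05))
```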
4) Distance calculations, using a point cloud generated by the MCC algorithm with 311,000 points (out of the 3,300,000-point original cloud):
-- C2C with a local model (quadric, radius of 10): returned distances, but not signed ones. This gave me a first glimpse of the distances and might be a good orientation for the other ways of computing them. The greatest distance was 18.93 m.
-- C2M returned signed distances and a great visualization; I can read it better as points fall below or rise above the surface. I set up a color scale that shows the ±0.5 m band around the surface (0 m) in green, changes color at 1.0 m and reaches the extreme colors at 2.0 m; I'm mostly interested in the points within ±0.5 m of the reference (cloud or surface). The greatest positive (above-ground) distance was 19.34 m, a 2.17% increase over C2C for the same cloud (checked in the sketch below).
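Two small asides, sketched with SciPy (clouds assumed to be (N, 3) NumPy arrays): C2C distances come back unsigned because a nearest-neighbor (or local-model) distance involves no oriented surface normal, and the 2.17% figure is just the relative change of the two maxima:

```python
from scipy.spatial import cKDTree

def c2c_nearest_neighbor(compared, reference):
    # Plain nearest-neighbor distance: no surface normal is involved,
    # so the result is non-negative by construction -- hence no sign.
    dists, _ = cKDTree(reference).query(compared)
    return dists

# The 2.17 % above is simply the relative change of the two maxima:
c2c_max, c2m_max = 18.93, 19.34
print(100.0 * (c2m_max - c2c_max) / c2c_max)  # -> 2.166..., i.e. ~2.17 %
```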
5) Now I'm on to M3C2. My initial setup is as follows (a simplified sketch of what M3C2 computes in the vertical-normal case follows this list):
Cloud #1: reference ground cloud (around 5 m spacing)
Cloud #2: new ground classification to test
Scales: "Guess params" results: normals of 25, projection of 25, max depth of 30. I guess these come from the approximate 5 m spacing of the reference cloud. "Compute normals on core points" checked. I've set D = 26, d = 12 and a depth of 50.
Core points: Use Cloud #1.
Normals: Multi-scale with min = 6, step = 2, max = 100; "use core points for normal calculation" checked. Should I check "Vertical" since this is ALS data?
Orientation: +Z, as I'm working with ALS data (?)
Advanced: "Do not use multiple pass for depth" checked.
Output: project core points on Cloud #2; "export standard deviation and density at projection scale" checked.
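To build intuition for the setup above, here is a much simplified sketch of what M3C2 computes when normals are forced vertical (the common ALS case); it omits the multi-scale normal estimation and the registration-error term of the full algorithm (Lague et al., 2013):

```python
import numpy as np
from scipy.spatial import cKDTree

def m3c2_vertical(core, cloud1, cloud2, d=12.0, max_depth=50.0):
    """Simplified M3C2 with fixed vertical normals.

    For each core point, gather the points of each cloud inside a vertical
    cylinder of diameter d (projection scale) and length 2 * max_depth,
    then return the difference of mean heights and a 95 % level of detection.
    """
    t1 = cKDTree(cloud1[:, :2])
    t2 = cKDTree(cloud2[:, :2])
    dist = np.full(len(core), np.nan)
    lod95 = np.full(len(core), np.nan)
    for k, p in enumerate(core):
        i1 = t1.query_ball_point(p[:2], d / 2.0)
        i2 = t2.query_ball_point(p[:2], d / 2.0)
        z1 = cloud1[i1, 2]
        z2 = cloud2[i2, 2]
        # Respect the max depth along the (vertical) normal.
        z1 = z1[np.abs(z1 - p[2]) <= max_depth]
        z2 = z2[np.abs(z2 - p[2]) <= max_depth]
        if len(z1) < 4 or len(z2) < 4:
            continue  # not enough points in one of the cylinders
        dist[k] = z2.mean() - z1.mean()
        # 95 % level of detection, without the registration-error term.
        lod95[k] = 1.96 * np.sqrt(z1.var() / len(z1) + z2.var() / len(z2))
    return dist, lod95
```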
Questions:
A) On normals, should I check "Vertical" since this is ALS data? I've already set the orientation to +Z.
B) What's the impact on M3C2 of using two clouds that differ in density, spacing and number of points?
C) Does anyone have tips on using M3C2?
D) "Guess params" sets the normal scale D and the projection scale d to equal numbers, and swapping Cloud #1 and Cloud #2 changes the ratio between D and d... Any comments on how the D/d ratio should be set?
E) Any suggestions on how to obtain a ground reference to check my other ground classifications?
Excuse my English...
Thanks a lot for any comments, replies or suggestions!
Attachment: ALS_merged_removepoints_SOR10_3.png (ALS data)
--
Luiz Fernando
Re: M3C2 - Point cloud comparison
Attachment: poissonrecon_depth10_weight0_samples1_neumann.png (Reference surface by PoissonRecon)
--
Luiz Fernando
Re: M3C2 - Point cloud comparison
A) Yes, especially if you expect the changes to have occurred vertically.
B) Actually, subsampling the core points is just optional and meant to speed up the calculations. Once you are happy with the result, you should go with the full cloud resolution (or use spatial subsampling with a small radius).
C and D) The best idea is still to read the wiki AND the article!
E) Try the qCSF plugin (https://www.cloudcompare.org/doc/wiki/i ... F_(plugin))
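(For what it's worth, the CSF algorithm behind qCSF also ships as standalone Python bindings from its authors; a minimal sketch, assuming the `cloth-simulation-filter` package and `laspy` are installed, with a hypothetical file name:)

```python
import numpy as np
import laspy  # pip install laspy
import CSF    # pip install cloth-simulation-filter

las = laspy.read("als_tile.las")  # hypothetical input file
xyz = np.vstack((las.x, las.y, las.z)).T

csf = CSF.CSF()
csf.params.bSloopSmooth = True    # helps on steep terrain
csf.params.cloth_resolution = 0.5
csf.setPointCloud(xyz)

ground, non_ground = CSF.VecInt(), CSF.VecInt()
csf.do_filtering(ground, non_ground)  # fills the two index vectors
ground_points = xyz[np.array(ground)]
```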
Daniel, CloudCompare admin