How does CloudCompare determine initial camera placement and zoom?
Posted: Tue Mar 26, 2024 5:19 am
My LIDAR device can literally take days to scan the bit of scenery I point it at. Every once in a while, I convert the polar-coordinates CSV file it generates into an XYZ file with my software and open it in CloudCompare. And each and every time, I have to go into the Camera Settings, manually enter 0 in all the fields, and set an FOV value that adequately blows up the 2D rendering for my display, because I want to see the point cloud from the device's point of view, while CloudCompare tries to be smart and - apparently - computes a camera position and zoom from the actual points in the cloud.
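For reference, the polar-to-Cartesian step in my converter is roughly this (a Python-style sketch; the function name is made up, and the axis convention - scan direction along -Z, as I describe further down - is specific to my setup):

```python
import math

def polar_to_xyz(r, azimuth, elevation):
    """Convert one polar sample (range, azimuth, elevation in radians)
    to Cartesian XYZ with the device at the origin.
    Axis convention (an assumption about this setup):
    the scan direction is -Z, so points in front of the device
    get negative Z values."""
    x = r * math.cos(elevation) * math.sin(azimuth)
    y = r * math.sin(elevation)
    z = -r * math.cos(elevation) * math.cos(azimuth)
    return x, y, z
```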
I can force CloudCompare to center the camera in X and Y by making my converter inject 8 fake points into the cloud at those 8 coordinates:
-max_x, -max_y, -max_z
+max_x, -max_y, -max_z
-max_x, +max_y, -max_z
+max_x, +max_y, -max_z
-max_x, -max_y, +max_z
+max_x, -max_y, +max_z
-max_x, +max_y, +max_z
+max_x, +max_y, +max_z
That forces CloudCompare to compute a bounding box whose center is at the device's origin. The initial view ends up zoomed out about twice as far (because that artificial bounding box is over twice as large as the real one, since 4 of those artificial points are "behind" the LIDAR), but at least all I have to do is zoom in to get a perfectly placed render of my partial scan - and hold Shift to slow the zooming down, because the zoom step has become too large as well.
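The injection step above can be sketched like this (again Python-style; the names are illustrative, not my real converter):

```python
def inject_corners(points):
    """Append the 8 corners of a box that is symmetric about the origin,
    so CloudCompare's fitted bounding box is centred on the device.
    `points` is a list of (x, y, z) tuples."""
    # Half-extents of a box centred on the origin that contains every point.
    max_x = max(abs(x) for x, _, _ in points)
    max_y = max(abs(y) for _, y, _ in points)
    max_z = max(abs(z) for _, _, z in points)
    # All 8 sign combinations give the 8 fake corner points.
    for sx in (-1.0, 1.0):
        for sy in (-1.0, 1.0):
            for sz in (-1.0, 1.0):
                points.append((sx * max_x, sy * max_y, sz * max_z))
    return points
```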
I was hoping to still get the correct camera placement without the crazy zoom by placing the 4 "back" points on the XY plane at zero depth like this:
-max_x, -max_y, -max_z
+max_x, -max_y, -max_z
-max_x, +max_y, -max_z
+max_x, +max_y, -max_z
-max_x, -max_y, 0
+max_x, -max_y, 0
-max_x, +max_y, 0
+max_x, +max_y, 0
If I do that, the initial camera angles are okay, but for some reason half of the render is off the screen and I have to slide it back into view.
Second issue: if I want to look in the direction my LIDAR is scanning, I have to set all the depths to negative values (on the Z axis). Otherwise the initial camera placement is in the cloud looking back at the device.
How come? It seems counter-intuitive that negative Z values are in front of the camera rather than positive values.
EDIT: Actually, never mind, it makes perfect sense: since Z is normally the altitude and the camera looks down by default, it's as if my samples were "below ground".
Mind you, it's not a big issue. I only use that coordinate system for a quick preview while the scan is running. When it's done, I switch my converter to the proper convention - XY plane parallel to the ground and Z as the altitude of the points, as it should be - and I take some time to flip the X axis and position the camera manually in CloudCompare. I just want to minimize the time it takes to preview a scan in progress.
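The post-scan axis remap in my converter is roughly this (a sketch; the exact mapping from the preview frame to the final ground-referenced frame is specific to my setup and the names are made up):

```python
def preview_to_final(x, y, z):
    """Remap a point from the preview frame (Y up, scan along -Z)
    to the final frame (Z up, XY parallel to the ground),
    flipping X as described above.
    This particular mapping is an assumption about my rig,
    not something CloudCompare requires."""
    return -x, -z, y
```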