How does CloudCompare determine initial camera placement and zoom?

coop
Posts: 4
Joined: Sat Mar 23, 2024 2:31 am

How does CloudCompare determine initial camera placement and zoom?

Post by coop »

My LIDAR device can literally take days to scan the bit of scenery I point it at. Every once in a while, I convert the polar-coordinates CSV file it generates into an XYZ file with my software and open it in CloudCompare. And each and every time, I have to go into the Camera Settings and manually enter 0 in all the fields, plus a FOV value that adequately blows up the 2D rendering for my display, because I want to see the point cloud from the device's point of view, but CloudCompare tries to be smart and - apparently - calculates a camera position and zoom based on the actual points in the cloud.
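For reference, the polar-to-Cartesian step my converter performs boils down to something like this minimal sketch (the angle conventions and units here are just illustrative - the actual CSV layout and conventions depend on the device):

```python
import math

def polar_to_xyz(azimuth_deg, elevation_deg, distance):
    """Convert one polar (spherical) sample to Cartesian XYZ.

    Illustrative conventions only: azimuth measured in the XY plane
    from +X, elevation measured from the XY plane, distance in the
    device's range units.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.cos(az)
    y = distance * math.cos(el) * math.sin(az)
    z = distance * math.sin(el)
    return x, y, z
```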

I can force CloudCompare to center the camera in X and Y by making my converter inject 8 fake points into the cloud at those 8 coordinates:

-max_x, -max_y, -max_z
+max_x, -max_y, -max_z
-max_x, +max_y, -max_z
+max_x, +max_y, -max_z
-max_x, -max_y, +max_z
+max_x, -max_y, +max_z
-max_x, +max_y, +max_z
+max_x, +max_y, +max_z

That forces CloudCompare to compute a bounding box with the device's origin at its center. The zoom level becomes twice as high (because that artificial bounding box is over twice as large as the real one, since 4 of those artificial points are "behind" the LIDAR), but at least all I have to do is zoom in to get a perfectly placed render of my partial scan - and hold Shift to slow down the zooming, because the zoom step has also become too large.
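In case it helps anyone, the injection step in my converter amounts to something like this (a simplified sketch, not the exact code; points are (x, y, z) tuples):

```python
def add_centering_corners(points):
    """Append the 8 corners of the symmetric box [-max, +max]^3 so
    that CloudCompare's auto-computed bounding box ends up centered
    on the device's origin."""
    mx = max(abs(x) for x, _, _ in points)
    my = max(abs(y) for _, y, _ in points)
    mz = max(abs(z) for _, _, z in points)
    for sx in (-mx, mx):
        for sy in (-my, my):
            for sz in (-mz, mz):
                points.append((sx, sy, sz))
    return points
```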

I was hoping to still get the correct camera placement without the crazy zoom by placing the 4 "back" points on the XY plane at zero depth like this:

-max_x, -max_y, -max_z
+max_x, -max_y, -max_z
-max_x, +max_y, -max_z
+max_x, +max_y, -max_z
-max_x, -max_y, 0
+max_x, -max_y, 0
-max_x, +max_y, 0
+max_x, +max_y, 0

If I do that, the initial camera angles are okay, but for some reason half of the render is off the screen and I have to slide it back into view.

Second issue: if I want to look in the direction my LIDAR is scanning, I have to set all the depths to negative values (on the Z axis). Otherwise the initial camera placement is in the cloud looking back at the device.

How come? It seems counter-intuitive that negative Z values are in front of the camera rather than positive values.

EDIT: Actually never mind, it makes perfect sense: since Z is normally the altitude and the camera is looking down by default, it's like my samples were "below ground".

Mind you, it's not a big issue. I only use that coordinate system to do a quick preview while the scan is running. When it's done, my converter switches to the proper convention - the XY plane parallel to the ground and Z the altitude of the points - and I take some time to flip the X axis and position the camera manually in CloudCompare. I just want to minimize the time it takes to preview the scan.
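The remap itself is trivial in the converter - something like the function below, though the exact axis mapping depends on how the device is mounted, so treat this as a guess rather than the real code:

```python
def preview_to_world(x, y, z):
    """Remap preview coordinates (Z = depth along the scan direction)
    to the final convention (Z = altitude, XY = ground plane), with
    the X flip mentioned above. The mapping is device-dependent."""
    return -x, z, y
```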
daniel
Site Admin
Posts: 7707
Joined: Wed Oct 13, 2010 7:34 am
Location: Grenoble, France
Contact:

Re: How does CloudCompare determine initial camera placement and zoom?

Post by daniel »

I was going to say that only the zoom/scale is adjusted to the point cloud extents, not the orientation.

Have you tried setting your camera once and then saving the 'viewport' as an object? CTRL+V (or Display > Save viewport as object).

Then you can save this viewport in a dedicated BIN file and reload it anytime you want to restore the correct viewpoint (you'll just have to click on the 'Apply' button in its properties).
Daniel, CloudCompare admin
coop
Posts: 4
Joined: Sat Mar 23, 2024 2:31 am

Re: How does CloudCompare determine initial camera placement and zoom?

Post by coop »

daniel wrote: Tue Mar 26, 2024 6:53 pm Then you can save this viewport in a dedicated BIN file, and reload it anytime you want to restore the correct viewpoint (you'll just have to click on the 'Apply' button in its properties).
Yes, I'm aware you can save viewports. In fact, I have a .bin full of useful viewports, because my LIDAR is a test device mounted at a fixed point looking at the exact same scenery, so the views are identical from one scan to the next.

But when I just want to check the completion of the scan, and whether something went horribly wrong so I can stop it right away without waiting hours or days for it to finish, restoring a viewport isn't as quick as loading the XYZ with the camera already properly oriented, zooming in, and quickly reckoning what percentage of scan lines has been completed.