How to generate composite texture file for mesh from multiple scalar fields
Posted: Tue Jun 25, 2024 11:22 am
Hi, I am trying to make custom textures for my meshes. My goal is to make grayscale textures that label the meshes according to several quantities (a rough sketch of how I picture computing them per point follows the list):
- inclination (presumably this can come from dip/dip direction)
- planarity
- roughness
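For context, this is roughly how I picture computing these three values per point with Open3D and NumPy. It is only a minimal sketch: the neighbourhood radius and the exact planarity/roughness formulas (eigenvalue-based planarity, point-to-local-plane distance for roughness) are my own assumptions, not something I am claiming is standard.

```python
import numpy as np
import open3d as o3d

def per_point_fields(pcd, radius=0.25):
    """Compute inclination, planarity and roughness for each point.

    radius is a guess and should match the scale of the features of interest.
    """
    pts = np.asarray(pcd.points)
    tree = o3d.geometry.KDTreeFlann(pcd)
    n = len(pts)
    inclination = np.zeros(n)
    planarity = np.zeros(n)
    roughness = np.zeros(n)

    for i in range(n):
        k, idx, _ = tree.search_radius_vector_3d(pcd.points[i], radius)
        if k < 4:
            continue  # not enough neighbours for a stable plane fit
        nbrs = pts[idx, :]
        centroid = nbrs.mean(axis=0)
        cov = np.cov((nbrs - centroid).T)
        evals, evecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
        l3, l2, l1 = evals                   # so l1 >= l2 >= l3
        normal = evecs[:, 0]                 # eigenvector of smallest eigenvalue

        # Inclination: angle between the local normal and the vertical axis (degrees)
        inclination[i] = np.degrees(np.arccos(abs(normal[2])))
        # Planarity: (l2 - l3) / l1 from the local covariance eigenvalues
        planarity[i] = (l2 - l3) / l1 if l1 > 0 else 0.0
        # Roughness: distance of the point to the local best-fit plane
        roughness[i] = abs(np.dot(pts[i] - centroid, normal))

    return inclination, planarity, roughness
```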
In some cases I start from meshes produced by photogrammetry in WebODM, and in other cases I start from LiDAR point clouds, but I would like a single, consistent method for processing both automatically. In both cases I need to end up with a mesh that has an RGB texture representing its actual appearance, plus the custom texture I describe above. The point clouds exported by WebODM (from my photogrammetry surveys) are very noisy compared to the point clouds from the LiDAR surveys, so it is difficult to get good results when working on them directly. I have found it works better to sample point clouds from the meshes that WebODM exports and run the calculations on those, since the meshes themselves look more realistic. Similarly, the LiDAR point clouds come from drone surveys and contain overlap areas with roughly twice the normal point density, which particularly affects roughness and planarity. There I have found it works better to run a Poisson reconstruction, sample the resulting mesh, and then run the calculations on that resampled point cloud.
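For reference, this is roughly how I currently produce the derived point clouds in both cases, again sketched with Open3D. The file names, point counts, normal-estimation radius and Poisson depth are placeholders I chose for illustration, not recommended values.

```python
import open3d as o3d

# Case 1: start from a WebODM mesh and sample a point cloud from it
mesh = o3d.io.read_triangle_mesh("webodm_model.obj")   # placeholder path
mesh.compute_vertex_normals()
sampled = mesh.sample_points_poisson_disk(number_of_points=500_000)

# Case 2: start from a LiDAR cloud (converted to PLY beforehand, since
# Open3D does not read LAS directly), rebuild a mesh, then resample it
lidar = o3d.io.read_point_cloud("lidar_survey.ply")     # placeholder path
lidar.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))
lidar.orient_normals_consistent_tangent_plane(30)       # Poisson needs oriented normals
poisson_mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    lidar, depth=10)
resampled = poisson_mesh.sample_points_poisson_disk(number_of_points=500_000)

o3d.io.write_point_cloud("webodm_sampled.ply", sampled)
o3d.io.write_point_cloud("lidar_resampled.ply", resampled)
```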
Is it reasonable/good practice to create derived point clouds in this way? Are there downsides?