Calibrate + PointCloud Generation (Receiver Side)

New Modification and Success! (Mar 27 2024)

  1. Generate necessary calibration and transformations
    1. (slave depth to color / main depth to color) calibration + transformation
    2. (slave color to main color) transformation → using a checkerboard (see the sketch below)
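For reference, a minimal sketch of how the checkerboard transform might be estimated with OpenCV's stereoCalibrate. The board size, square length, intrinsics handling, and helper name are assumptions; its R/t outputs are what tr_secondary_color_to_main_color.R / .t refer to below.
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Hypothetical helper: estimate the secondary->main color-camera extrinsics
// from paired checkerboard views captured by both cameras.
bool CalibrateSecondaryToMain(const std::vector<cv::Mat>& main_views,
                              const std::vector<cv::Mat>& secondary_views,
                              cv::Mat main_K, cv::Mat main_dist,
                              cv::Mat sec_K, cv::Mat sec_dist,
                              cv::Mat& R, cv::Mat& t) {
    const cv::Size board_size(9, 6);   // inner corners per row/column (assumption)
    const float square_mm = 24.0f;     // checkerboard square size in mm (assumption)

    // 3D layout of one board in the board's own frame (z = 0 plane)
    std::vector<cv::Point3f> board;
    for (int r = 0; r < board_size.height; ++r)
        for (int c = 0; c < board_size.width; ++c)
            board.emplace_back(c * square_mm, r * square_mm, 0.0f);

    std::vector<std::vector<cv::Point3f>> obj_pts;
    std::vector<std::vector<cv::Point2f>> main_pts, sec_pts;
    for (size_t i = 0; i < main_views.size(); ++i) {
        std::vector<cv::Point2f> cm, cs;
        // keep only views where both cameras see the full board
        if (cv::findChessboardCorners(main_views[i], board_size, cm) &&
            cv::findChessboardCorners(secondary_views[i], board_size, cs)) {
            obj_pts.push_back(board);
            main_pts.push_back(cm);
            sec_pts.push_back(cs);
        }
    }
    if (obj_pts.empty()) return false;

    // With CALIB_FIX_INTRINSIC, only the relative pose R|t is solved for;
    // t comes out in the board's units (mm here), matching the /1000.0 below.
    cv::Mat E, F;
    cv::stereoCalibrate(obj_pts, sec_pts, main_pts,
                        sec_K, sec_dist, main_K, main_dist,
                        main_views[0].size(), R, t, E, F,
                        cv::CALIB_FIX_INTRINSIC);
    return true;
}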
// create the extrinsics homogeneous matrix (row-major 4x4)
cv::Matx33d rotation_matrix = tr_secondary_color_to_main_color.R;
cv::Vec3d translation_vector = tr_secondary_color_to_main_color.t / 1000.0;  // mm -> m
cv::Matx44d homo_matrix(rotation_matrix(0, 0), rotation_matrix(0, 1), rotation_matrix(0, 2), translation_vector(0),
                        rotation_matrix(1, 0), rotation_matrix(1, 1), rotation_matrix(1, 2), translation_vector(1),
                        rotation_matrix(2, 0), rotation_matrix(2, 1), rotation_matrix(2, 2), translation_vector(2),
                        0.0, 0.0, 0.0, 1.0);
Eigen::Matrix4d homogeneous_matrix = convertCVMatxToEigen(homo_matrix);
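convertCVMatxToEigen is a project-local helper not shown here; a minimal element-wise version (assuming it does nothing fancier) could look like this:
#include <Eigen/Dense>
#include <opencv2/core.hpp>

// Hedged sketch of the helper used above: copy a 4x4 OpenCV matrix into an
// Eigen matrix element by element (Open3D's Transform expects Eigen::Matrix4d).
Eigen::Matrix4d convertCVMatxToEigen(const cv::Matx44d& m) {
    Eigen::Matrix4d out;
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out(r, c) = m(r, c);
    return out;
}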
  2. Generate k4a::image point clouds (see the setup sketch below)
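For context, a minimal sketch of the per-device setup that produces the k4a::transformation objects and the depth-in-color image used in the next snippet. The "main" side is shown (the secondary side is symmetric); the device configuration values are assumptions.
#include <k4a/k4a.hpp>
#include <chrono>

// Hedged sketch: open the device and build the depth-to-color transformation.
k4a::device main_device = k4a::device::open(0);
k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
config.color_format = K4A_IMAGE_FORMAT_COLOR_BGRA32;
config.color_resolution = K4A_COLOR_RESOLUTION_720P;
config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
main_device.start_cameras(&config);

// This calibration + transformation pair is the "main depth to color" item from step 1.
k4a::calibration main_calibration =
        main_device.get_calibration(config.depth_mode, config.color_resolution);
k4a::transformation main_depth_to_main_color(main_calibration);

// Grab one capture and reproject its depth frame into the color camera's
// geometry, yielding the main_depth_in_main_color image used below.
k4a::capture capture;
main_device.get_capture(&capture, std::chrono::milliseconds(1000));
k4a::image main_depth_in_main_color =
        main_depth_to_main_color.depth_image_to_color_camera(capture.get_depth_image());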
// make a point cloud in each color camera's perspective
k4a::image main_xyz_image = main_depth_to_main_color.depth_image_to_point_cloud(main_depth_in_main_color, K4A_CALIBRATION_TYPE_COLOR);
k4a::image secondary_xyz_image = secondary_depth_to_secondary_color.depth_image_to_point_cloud(secondary_depth_in_secondary_color, K4A_CALIBRATION_TYPE_COLOR);
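The later steps operate on Open3D clouds (main_pc, sub_pc), and the conversion from the k4a XYZ images is not shown above. A hedged sketch of that bridge: MakeOpen3DCloud is a hypothetical helper, and main_color_image / secondary_color_image stand in for the BGRA color frames; per the Sensor SDK, the XYZ buffer holds one int16_t millimeter triplet per color pixel.
#include <k4a/k4a.hpp>
#include <open3d/Open3D.h>
#include <cstdint>
#include <memory>

// Hedged sketch: build an Open3D point cloud from a k4a XYZ image and the
// matching BGRA color image (both in the color camera's geometry).
std::shared_ptr<open3d::geometry::PointCloud> MakeOpen3DCloud(
        const k4a::image& xyz_image, const k4a::image& color_image) {
    auto cloud = std::make_shared<open3d::geometry::PointCloud>();
    const int n = xyz_image.get_width_pixels() * xyz_image.get_height_pixels();
    const auto* xyz = reinterpret_cast<const int16_t*>(xyz_image.get_buffer());
    const uint8_t* bgra = color_image.get_buffer();
    for (int i = 0; i < n; ++i) {
        if (xyz[3 * i + 2] == 0) continue;  // zero depth = invalid point
        // Kinect reports millimeters; the Open3D code below works in meters.
        cloud->points_.emplace_back(xyz[3 * i + 0] / 1000.0,
                                    xyz[3 * i + 1] / 1000.0,
                                    xyz[3 * i + 2] / 1000.0);
        cloud->colors_.emplace_back(bgra[4 * i + 2] / 255.0,   // R
                                    bgra[4 * i + 1] / 255.0,   // G
                                    bgra[4 * i + 0] / 255.0);  // B
    }
    return cloud;
}

auto main_pc = MakeOpen3DCloud(main_xyz_image, main_color_image);
auto sub_pc = MakeOpen3DCloud(secondary_xyz_image, secondary_color_image);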
  3. Apply the transformation to the generated slave point cloud to match the master perspective
// Downsample the point clouds before registration
float voxel_size = 0.01f;
auto main_pc_down = main_pc->VoxelDownSample(voxel_size);
auto sub_pc_down = sub_pc->VoxelDownSample(voxel_size);

// Estimate normals on the downsampled point clouds
const double normal_radius = voxel_size * 2.0;
open3d::geometry::KDTreeSearchParamHybrid normals_params(normal_radius, 30);
const bool fast_normal_computation = true;
main_pc_down->EstimateNormals(normals_params, fast_normal_computation);
sub_pc_down->EstimateNormals(normals_params, fast_normal_computation);

// Apply the checkerboard-derived transformation to bring sub_pc into main_pc's frame
sub_pc->Transform(homogeneous_matrix);
  4. Apply the Colored ICP algorithm to refine the transformation
// Preprocess the point clouds (e.g., downsampling, estimating normals)
double voxel_size = 0.04;
auto main_temp = main_pc->VoxelDownSample(voxel_size);
auto sub_temp = sub_pc->VoxelDownSample(voxel_size);

main_temp->EstimateNormals(open3d::geometry::KDTreeSearchParamHybrid(voxel_size * 2, 30));
sub_temp->EstimateNormals(open3d::geometry::KDTreeSearchParamHybrid(voxel_size * 2, 30));

// refine the transformation using color
icp_result = open3d::pipelines::registration::RegistrationColoredICP(
				*sub_temp, *main_temp, voxel_size, Eigen::Matrix4d::Identity(),
				open3d::pipelines::registration::TransformationEstimationForColoredICP(),
				open3d::pipelines::registration::ICPConvergenceCriteria(1e-10, 1e-10, 500));

// refine the transformation using points
icp_result = open3d::pipelines::registration::RegistrationICP(
				*sub_temp, *main_temp, voxel_size, icp_result.transformation_,
				open3d::pipelines::registration::TransformationEstimationPointToPoint(),
				open3d::pipelines::registration::ICPConvergenceCriteria(1e-16, 1e-16, 500));
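After the two refinements, the accumulated ICP transform still has to be applied to the full-resolution slave cloud before the two clouds are fused; a short sketch with the variables above:
// Chain the ICP refinement onto the checkerboard transform already applied
// to sub_pc, then fuse the two clouds into one.
sub_pc->Transform(icp_result.transformation_);
*main_pc += *sub_pc;  // Open3D's operator+= concatenates point clouds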

Full Point Cloud Reconstruction (Slave + Master)

Red: Slave Camera / Yellow: Master Camera

Possible Improvement: Wrap the ICP refinement in a loop (e.g., coarse-to-fine over decreasing voxel sizes) to make the transformation more accurate; a sketch follows.
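A hedged sketch of that improvement, replacing the single-pass refinement above (the voxel schedule and convergence numbers are assumptions):
// Re-run ICP at shrinking voxel sizes, feeding each result in as the next
// initialization; Open3D's returned transformation_ always includes the init.
Eigen::Matrix4d accumulated = Eigen::Matrix4d::Identity();
for (double vs : {0.04, 0.02, 0.01}) {
    auto src = sub_pc->VoxelDownSample(vs);
    auto dst = main_pc->VoxelDownSample(vs);
    src->EstimateNormals(open3d::geometry::KDTreeSearchParamHybrid(vs * 2, 30));
    dst->EstimateNormals(open3d::geometry::KDTreeSearchParamHybrid(vs * 2, 30));
    auto result = open3d::pipelines::registration::RegistrationICP(
        *src, *dst, vs, accumulated,
        open3d::pipelines::registration::TransformationEstimationPointToPlane(),
        open3d::pipelines::registration::ICPConvergenceCriteria(1e-8, 1e-8, 100));
    accumulated = result.transformation_;
}
sub_pc->Transform(accumulated);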

Done! See the next section, Sending Data to Receiver for Fusion.