Refine registration

Input arguments

This script runs with python run_system.py [config] --refine. In [config], ["path_dataset"] should have a subfolder fragments, which stores the fragments as .ply files along with a pose graph as a .json file.
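For reference, a minimal [config] sketch. The key names are taken from the code listings on this page; the values are purely illustrative and not prescriptive defaults:

```json
{
    "path_dataset": "dataset/scene/",
    "voxel_size": 0.05,
    "icp_method": "color",
    "python_multi_threading": true,
    "debug_mode": false
}
```

The code below also reads several template_* keys (e.g. template_global_posegraph_optimized, template_refined_posegraph) that name the pose graph files relative to path_dataset.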

The main function runs local_refinement and optimize_posegraph_for_scene. The first function performs pairwise registration on the pairs detected by Register fragments. The second function performs multiway registration.

Fine-grained registration

def multiscale_icp(source,
                   target,
                   voxel_size,
                   max_iter,
                   config,
                   init_transformation=np.identity(4)):
    current_transformation = init_transformation
    for scale in range(len(max_iter)):  # multi-scale approach
        max_iteration = max_iter[scale]
        distance_threshold = config["voxel_size"] * 1.4
        print("voxel_size {}".format(voxel_size[scale]))
        source_down = source.voxel_down_sample(voxel_size[scale])
        target_down = target.voxel_down_sample(voxel_size[scale])
        if config["icp_method"] == "point_to_point":
            result_icp = cv3d.pipelines.registration.registration_icp(
                source_down, target_down, distance_threshold,
                current_transformation,
                cv3d.pipelines.registration.
                TransformationEstimationPointToPoint(),
                cv3d.pipelines.registration.ICPConvergenceCriteria(
                    max_iteration=max_iteration))
        else:
            # The remaining methods require per-point normals.
            source_down.estimate_normals(
                cv3d.geometry.KDTreeSearchParamHybrid(
                    radius=voxel_size[scale] * 2.0, max_nn=30))
            target_down.estimate_normals(
                cv3d.geometry.KDTreeSearchParamHybrid(
                    radius=voxel_size[scale] * 2.0, max_nn=30))
            if config["icp_method"] == "point_to_plane":
                result_icp = cv3d.pipelines.registration.registration_icp(
                    source_down, target_down, distance_threshold,
                    current_transformation,
                    cv3d.pipelines.registration.
                    TransformationEstimationPointToPlane(),
                    cv3d.pipelines.registration.ICPConvergenceCriteria(
                        max_iteration=max_iteration))
            elif config["icp_method"] == "color":
                # Colored ICP is sensitive to the threshold.
                # Fall back to a preset distance threshold that works better.
                # TODO: make it adjustable in the upgraded system.
                result_icp = cv3d.pipelines.registration.registration_colored_icp(
                    source_down, target_down, voxel_size[scale],
                    current_transformation,
                    cv3d.pipelines.registration.
                    TransformationEstimationForColoredICP(),
                    cv3d.pipelines.registration.ICPConvergenceCriteria(
                        relative_fitness=1e-6,
                        relative_rmse=1e-6,
                        max_iteration=max_iteration))
            elif config["icp_method"] == "generalized":
                result_icp = cv3d.pipelines.registration.registration_generalized_icp(
                    source_down, target_down, distance_threshold,
                    current_transformation,
                    cv3d.pipelines.registration.
                    TransformationEstimationForGeneralizedICP(),
                    cv3d.pipelines.registration.ICPConvergenceCriteria(
                        relative_fitness=1e-6,
                        relative_rmse=1e-6,
                        max_iteration=max_iteration))
            else:
                raise ValueError("Unknown icp_method: {}".format(
                    config["icp_method"]))
        current_transformation = result_icp.transformation
        if scale == len(max_iter) - 1:  # information matrix at finest scale only
            information_matrix = cv3d.pipelines.registration.get_information_matrix_from_point_clouds(
                source_down, target_down, voxel_size[scale] * 1.4,
                result_icp.transformation)

    if config["debug_mode"]:
        draw_registration_result_original_color(source, target,
                                                result_icp.transformation)
    return (result_icp.transformation, information_matrix)

Several options are given for the fine-grained registration: point_to_point, point_to_plane, color, and generalized ICP. The color option is recommended since it uses color information to prevent drift. See [Park2017] for details.
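To build intuition for what the point-to-point estimator solves at each ICP iteration, the sketch below computes the closed-form rigid alignment of two corresponded point sets (the Kabsch/Umeyama solution) with plain NumPy. This is an illustration of the underlying math, not the library's implementation:

```python
import numpy as np


def estimate_point_to_point(source, target):
    # Closed-form rigid transform minimizing sum ||R s_i + t - t_i||^2
    # over corresponded points (what point-to-point estimation solves
    # for a fixed correspondence set).
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T


rng = np.random.default_rng(0)
source = rng.standard_normal((100, 3))
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle), np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
target = source @ R_true.T + t_true  # noiseless correspondences

T = estimate_point_to_point(source, target)
```

With exact correspondences the rotation and translation are recovered exactly; the iterative part of ICP exists because real correspondences must be re-estimated by nearest-neighbor search at every step.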

Multiway registration

def update_posegraph_for_scene(s, t, transformation, information, odometry,
                               pose_graph):
    if t == s + 1:  # odometry case
        odometry = np.dot(transformation, odometry)
        odometry_inv = np.linalg.inv(odometry)
        pose_graph.nodes.append(
            cv3d.pipelines.registration.PoseGraphNode(odometry_inv))
        pose_graph.edges.append(
            cv3d.pipelines.registration.PoseGraphEdge(s,
                                                      t,
                                                      transformation,
                                                      information,
                                                      uncertain=False))
    else:  # loop closure case
        pose_graph.edges.append(
            cv3d.pipelines.registration.PoseGraphEdge(s,
                                                      t,
                                                      transformation,
                                                      information,
                                                      uncertain=True))
    return (odometry, pose_graph)

This script uses the technique demonstrated in Multiway registration. Function update_posegraph_for_scene builds a pose graph for multiway registration of all fragments. Each graph node represents a fragment and its pose, which transforms the geometry to the global space.
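The odometry branch above chains pairwise transforms with odometry = np.dot(transformation, odometry) and stores the inverse as the node pose. A toy NumPy sketch of that accumulation, using translation-only transforms for readability:

```python
import numpy as np


def translation(x, y, z):
    # 4x4 homogeneous transform that only translates.
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T


# Pairwise transforms between consecutive fragments (0->1, 1->2),
# mimicking the odometry case of update_posegraph_for_scene.
pairwise = [translation(1, 0, 0), translation(0, 2, 0)]

odometry = np.eye(4)
node_poses = [np.linalg.inv(odometry)]  # node 0 anchors the global frame
for T in pairwise:
    odometry = np.dot(T, odometry)       # same accumulation as the script
    node_poses.append(np.linalg.inv(odometry))
```

After both edges, the accumulated odometry is the composition of the two translations, so the pose stored for node 2 is its inverse, translation(-1, -2, 0). Loop-closure edges add no node; they only add an uncertain edge that the optimizer may later downweight.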

Once a pose graph is built, function optimize_posegraph_for_scene is called for multiway registration.

def optimize_posegraph_for_scene(path_dataset, config):
    pose_graph_name = join(path_dataset, config["template_global_posegraph"])
    pose_graph_optimized_name = join(
        path_dataset, config["template_global_posegraph_optimized"])
    run_posegraph_optimization(
        pose_graph_name,
        pose_graph_optimized_name,
        max_correspondence_distance=config["voxel_size"] * 1.4,
        preference_loop_closure=config["preference_loop_closure_registration"])

Main registration loop

The function make_posegraph_for_refined_scene below calls all the functions introduced above.

def make_posegraph_for_refined_scene(ply_file_names, config):
    pose_graph = cv3d.io.read_pose_graph(
        join(config["path_dataset"],
             config["template_global_posegraph_optimized"]))

    n_files = len(ply_file_names)
    matching_results = {}
    for edge in pose_graph.edges:
        s = edge.source_node_id
        t = edge.target_node_id
        matching_results[s * n_files + t] = \
            matching_result(s, t, edge.transformation)

    if config["python_multi_threading"]:
        os.environ['OMP_NUM_THREADS'] = '1'
        max_workers = max(
            1, min(multiprocessing.cpu_count() - 1, len(pose_graph.edges)))
        mp_context = multiprocessing.get_context('spawn')
        with mp_context.Pool(processes=max_workers) as pool:
            args = [(ply_file_names, v.s, v.t, v.transformation, config)
                    for k, v in matching_results.items()]
            results = pool.starmap(register_point_cloud_pair, args)

        for i, r in enumerate(matching_results):
            matching_results[r].transformation = results[i][0]
            matching_results[r].information = results[i][1]
    else:
        for r in matching_results:
            (matching_results[r].transformation,
             matching_results[r].information) = \
                register_point_cloud_pair(ply_file_names,
                                          matching_results[r].s,
                                          matching_results[r].t,
                                          matching_results[r].transformation,
                                          config)

    pose_graph_new = cv3d.pipelines.registration.PoseGraph()
    odometry = np.identity(4)
    pose_graph_new.nodes.append(
        cv3d.pipelines.registration.PoseGraphNode(odometry))
    for r in matching_results:
        (odometry, pose_graph_new) = update_posegraph_for_scene(
            matching_results[r].s, matching_results[r].t,
            matching_results[r].transformation, matching_results[r].information,
            odometry, pose_graph_new)
    print(pose_graph_new)
    cv3d.io.write_pose_graph(
        join(config["path_dataset"], config["template_refined_posegraph"]),
        pose_graph_new)

The main workflow is pairwise local refinement followed by multiway registration.
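Note that make_posegraph_for_refined_scene keys matching_results by the flattened integer s * n_files + t. The inverse mapping is a simple divmod; the decode helper here is only for illustration and does not appear in the script:

```python
n_files = 14  # number of fragments, as in the example run below


def encode(s, t, n):
    # Flatten the ordered fragment pair (s, t) into one dictionary key,
    # exactly as make_posegraph_for_refined_scene does.
    return s * n + t


def decode(key, n):
    # Recover (s, t) from the flattened key.
    return divmod(key, n)


key = encode(3, 7, n_files)
pair = decode(key, n_files)
```

This works because t < n_files, so each pair maps to a unique integer; iterating the dictionary preserves insertion order in Python 3.7+, which keeps the odometry edges in sequence for update_posegraph_for_scene.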

Results

The pose graph optimization outputs the following messages:

[GlobalOptimizationLM] Optimizing PoseGraph having 14 nodes and 35 edges.
Line process weight : 789.730200
[Initial     ] residual : 1.208286e+04, lambda : 1.706306e+01
[Iteration 00] residual : 2.410383e+03, valid edges : 22, time : 0.000 sec.
[Iteration 01] residual : 8.127856e+01, valid edges : 22, time : 0.000 sec.
[Iteration 02] residual : 8.031355e+01, valid edges : 22, time : 0.000 sec.
Delta.norm() < 1.000000e-06 * (x.norm() + 1.000000e-06)
[GlobalOptimizationLM] total time : 0.001 sec.
[GlobalOptimizationLM] Optimizing PoseGraph having 14 nodes and 35 edges.
Line process weight : 789.730200
[Initial     ] residual : 8.031355e+01, lambda : 1.716826e+01
Delta.norm() < 1.000000e-06 * (x.norm() + 1.000000e-06)
[GlobalOptimizationLM] total time : 0.000 sec.
CompensateReferencePoseGraphNode : reference : 0

There are 14 fragments and 35 edges connecting them, of which 22 are uncertain loop-closure edges. The optimizer's line process downweights edges detected to be false positives; after such edges are pruned, pose graph optimization runs a second time to achieve tight alignment. In the run above all 22 loop-closure edges remain valid, so the second pass converges immediately.