Customized Integration

You can prototype a new RGB-D volumetric reconstruction algorithm with additional properties (e.g. semantic labels) while maintaining reasonable performance. An example can be found at examples/python/t_reconstruction_system/integrate_custom.py.
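For instance, the additional properties are declared when the voxel block grid is constructed. The sketch below adds a hypothetical per-voxel 'label' channel next to the standard 'tsdf' and 'weight' attributes; the attribute name, dtype, and sizes are illustrative rather than taken from the example script:

    import open3d as o3d
    import open3d.core as o3c

    device = o3d.core.Device('CUDA:0')  # or 'CPU:0'

    # Voxel block grid with a custom per-voxel attribute next to tsdf/weight
    vbg = o3d.t.geometry.VoxelBlockGrid(
        attr_names=('tsdf', 'weight', 'label'),
        attr_dtypes=(o3c.float32, o3c.float32, o3c.int32),
        attr_channels=((1), (1), (1)),
        voxel_size=3.0 / 512,
        block_resolution=16,
        block_count=50000,
        device=device)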

Activation

The frustum block selection remains the same, but then we manually activate these blocks and obtain their buffer indices in the hash map (see /tutorial/core/hashmap.ipynb):

    # Get the unique block coordinates that fall into the current view frustum
    frustum_block_coords = vbg.compute_unique_block_coordinates(
        depth, intrinsic, extrinsic, config.depth_scale, config.depth_max)

    # Activate them in the underlying hash map (they may already be inserted)
    vbg.hashmap().activate(frustum_block_coords)

    # Query their buffer indices in the underlying engine
    buf_indices, masks = vbg.hashmap().find(frustum_block_coords)

Voxel Indices

We can then unroll voxel indices in these blocks into a flattened array, along with their corresponding voxel coordinates.

    # Unroll the activated blocks into per-voxel coordinates and flat indices
    voxel_coords, voxel_indices = vbg.voxel_coordinates_and_flattened_indices(
        buf_indices)

Up to now, the preparation is finished. We can then perform customized geometry transformations through the Tensor interface, in the same fashion as we would in NumPy or PyTorch.
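If the Tensor interface is new to you, the following standalone sketch (not part of the example script) shows the NumPy-style operations used below, namely slicing, matrix multiplication, and boolean masking, on Open3D tensors:

    import open3d.core as o3c

    device = o3c.Device('CPU:0')

    # A toy 4x4 pose and a small batch of 3D points as Open3D tensors
    T = o3c.Tensor.eye(4, dtype=o3c.float32, device=device)
    points = o3c.Tensor([[0.0, 0.0, 1.0], [0.0, 0.0, 3.0]],
                        dtype=o3c.float32, device=device)

    # NumPy-style slicing and matmul: apply the pose to the points, (3, N)
    transformed = T[:3, :3] @ points.T() + T[:3, 3:]

    # Boolean masking: keep points closer than 2.0 along z
    mask = transformed[2] < 2.0
    near_points = transformed.T()[mask]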

Geometry transformation

We first transform the voxel coordinates to the frame’s coordinate system, project them to the image space, and filter out-of-bound correspondences:

    # Transform voxel centers from world to camera coordinates: (3, N)
    extrinsic_dev = extrinsic.to(device, o3c.float32)
    xyz = extrinsic_dev[:3, :3] @ voxel_coords.T() + extrinsic_dev[:3, 3:]

    # Project to the image plane: (3, N) -> (2, N)
    intrinsic_dev = intrinsic.to(device, o3c.float32)
    uvd = intrinsic_dev @ xyz
    d = uvd[2]
    u = (uvd[0] / d).round().to(o3c.int64)
    v = (uvd[1] / d).round().to(o3c.int64)

    # Filter out-of-bound correspondences
    mask_proj = (d > 0) & (u >= 0) & (v >= 0) \
        & (u < depth.columns) & (v < depth.rows)

    v_proj = v[mask_proj]
    u_proj = u[mask_proj]
    d_proj = d[mask_proj]

Customized integration

With the data association, we are able to conduct integration. In this example, we show the conventional TSDF integration written in vectorized Python code:

  • Read the associated RGB-D properties from the color/depth images at the associated u, v indices;

  • Read the voxels from the voxel buffer arrays (vbg.attribute) at masked voxel_indices;

  • Perform in-place modification using the weighted running average shown below.
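For reference, the conventional TSDF update performed by the code is the weighted running average

    tsdf_new = (tsdf_old * w + sdf) / (w + 1),    w_new = w + 1

where w is the accumulated observation weight of a voxel and sdf is the truncated signed distance observed in the current frame.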

    # Read the depth at the associated pixels and compute the signed distance
    depth_readings = depth.as_tensor()[v_proj, u_proj, 0].to(
        o3c.float32) / config.depth_scale
    sdf = depth_readings - d_proj

    # Keep only valid depth readings inside the truncation band
    mask_inlier = (depth_readings > 0) \
        & (depth_readings < config.depth_max) \
        & (sdf >= -trunc)

    sdf[sdf >= trunc] = trunc
    sdf = sdf / trunc

    # View the voxel buffers as flat arrays
    weight = vbg.attribute('weight').reshape((-1, 1))
    tsdf = vbg.attribute('tsdf').reshape((-1, 1))

    valid_voxel_indices = voxel_indices[mask_proj][mask_inlier]
    w = weight[valid_voxel_indices]
    wp = w + 1

    # Weighted running average of the TSDF
    tsdf[valid_voxel_indices] \
        = (tsdf[valid_voxel_indices] * w +
           sdf[mask_inlier].reshape(w.shape)) / (wp)

    if config.integrate_color:
        color = o3d.t.io.read_image(color_file_names[i]).to(device)
        color_readings = color.as_tensor()[v_proj,
                                           u_proj].to(o3c.float32)

        # Analogous running average for the color attribute
        color_buf = vbg.attribute('color').reshape((-1, 3))
        color_buf[valid_voxel_indices] \
            = (color_buf[valid_voxel_indices] * w +
               color_readings[mask_inlier]) / (wp)

    weight[valid_voxel_indices] = wp
    o3d.core.cuda.synchronize()

After all frames have been integrated, the voxel block grid is saved to config.path_npz with vbg.save(config.path_npz) and returned.
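The same pattern extends to customized properties. As a sketch, assuming the hypothetical 'label' attribute declared at the beginning of this page and a per-pixel label map label_readings sampled at (v_proj, u_proj), neither of which is part of the example script, the update could look like:

    # Hypothetical: overwrite the labels of the associated voxels with the
    # labels observed at the corresponding pixels in the current frame
    label = vbg.attribute('label').reshape((-1, 1))
    label[valid_voxel_indices] = label_readings[mask_inlier].reshape((-1, 1))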

You may follow the example and adapt it to your customized properties. Open3D supports conversion from and to PyTorch tensors without any memory copy (see /tutorial/core/tensor.ipynb#PyTorch-I/O-with-DLPack-memory-map), which can be used to leverage PyTorch's capabilities such as automatic differentiation and other operators.
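A minimal sketch of such a zero-copy round trip, assuming PyTorch is installed (the tensor contents are illustrative):

    import open3d.core as o3c
    import torch
    import torch.utils.dlpack

    # Open3D tensor -> PyTorch tensor, sharing the same memory via DLPack
    o3c_tensor = o3c.Tensor([[1.0, 2.0], [3.0, 4.0]], dtype=o3c.float32)
    torch_tensor = torch.utils.dlpack.from_dlpack(o3c_tensor.to_dlpack())

    # PyTorch tensor -> Open3D tensor, also without copying
    back = o3c.Tensor.from_dlpack(torch.utils.dlpack.to_dlpack(torch_tensor))

    # Writes are visible on both sides because the memory is shared
    torch_tensor[0, 0] = 100.0
    assert back[0, 0].item() == 100.0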