namespace registration {

void pybind_robust_kernels(py::module &m) {
    py::module m_robust_kernel = m.def_submodule(
            "robust_kernel",
            "Tensor-based robust kernel for outlier rejection.");
    py::native_enum<RobustKernelMethod>(
            m_robust_kernel, "RobustKernelMethod", "enum.Enum",
            "Robust kernel method for outlier rejection.")
            .value("L2Loss", RobustKernelMethod::L2Loss)
            .value("L1Loss", RobustKernelMethod::L1Loss)
            .value("HuberLoss", RobustKernelMethod::HuberLoss)
            .value("CauchyLoss", RobustKernelMethod::CauchyLoss)
            .value("GMLoss", RobustKernelMethod::GMLoss)
            .value("TukeyLoss", RobustKernelMethod::TukeyLoss)
            .value("GeneralizedLoss", RobustKernelMethod::GeneralizedLoss)
            .finalize();
    py::class_<RobustKernel> robust_kernel(
            m_robust_kernel, "RobustKernel",
            R"(
Base class that models a robust kernel for outlier rejection. The virtual
function ``weight()`` must be implemented in derived classes.
The main idea of a robust loss is to downweight large residuals that are
assumed to be caused by outliers, so that their influence on the solution
is reduced. This is achieved by optimizing:
.. math::
  :label: robust_loss

  \def\argmin{\mathop{\rm argmin}}
  x^{*} = \argmin_{x} \sum_{i=1}^{N} \rho(r_i(x)),
where :math:`\rho(r)` is also called the robust loss or kernel and
:math:`r_i(x)` is the residual.
Several robust kernels, such as Huber and Cauchy, have been proposed to deal
with different kinds of outliers.
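As a concrete example (the standard textbook form, stated here only for
illustration, with :math:`k` denoting the scaling parameter), the Huber
kernel is defined as:

.. math::
  \rho_{\mathrm{Huber}}(r) =
  \begin{cases}
    \frac{r^2}{2}                    & |r| \leq k, \\
    k \left(|r| - \frac{k}{2}\right) & \text{otherwise}.
  \end{cases}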
The optimization problem in :eq:`robust_loss` can be solved using the
iteratively reweighted least squares (IRLS) approach, which solves a sequence
of weighted least squares problems. We can see the relation between the
weighted least squares optimization in standard non-linear least squares and
the robust loss optimization by comparing the respective gradients, which go
to zero at the optimum (illustrated only for the :math:`i^\mathrm{th}`
residual):
.. math::
  \begin{align}
    \frac{1}{2}\frac{\partial (w_i r^2_i(x))}{\partial{x}}
    &=
    w_i r_i(x) \frac{\partial r_i(x)}{\partial{x}} \\
    \frac{\partial(\rho(r_i(x)))}{\partial{x}}
    &=
    \rho'(r_i(x)) \frac{\partial r_i(x)}{\partial{x}}.
  \end{align}
By setting the weight :math:`w_i = \frac{1}{r_i(x)}\rho'(r_i(x))`, we
can solve the robust loss optimization problem using existing techniques
for weighted least squares. This scheme allows standard solvers based on the
Gauss-Newton and Levenberg-Marquardt algorithms to optimize for robust losses,
and it is the one implemented in CloudViewer.
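Plugging the Huber kernel from above into this definition yields the familiar
Huber weight (again a standard result, stated only for illustration):

.. math::
  w(r) =
  \begin{cases}
    1             & |r| \leq k, \\
    \frac{k}{|r|} & \text{otherwise},
  \end{cases}

so inliers keep their full influence while large residuals are progressively
downweighted.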
Then we minimize the objective function using Gauss-Newton and determine
increments by iteratively solving:
.. math::
  \newcommand{\mat}[1]{\mathbf{#1}}
  \newcommand{\veca}[1]{\vec{#1}}
  \renewcommand{\vec}[1]{\mathbf{#1}}
  \begin{align}
    \veca{\Delta T}^{k} =
    - \left(\mat{J}^\top \mat{W} \mat{J}\right)^{-1} \mat{J}^\top \mat{W} \vec{r},
  \end{align}
where :math:`\mat{W} \in \mathbb{R}^{n\times n}` is a diagonal matrix containing
weights :math:`w_i` for each residual :math:`r_i`.
The different loss functions only impact the weight for each residual
during the optimization step.
Therefore, the only impact of the choice of kernel is through its
first-order derivative.
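For instance, the Cauchy kernel and the weight it induces (standard forms,
shown only for illustration) are:

.. math::
  \rho_{\mathrm{Cauchy}}(r) = \frac{k^2}{2}
  \log\left(1 + \left(\frac{r}{k}\right)^2\right),
  \qquad
  w(r) = \frac{\rho'(r)}{r} = \frac{1}{1 + \left(\frac{r}{k}\right)^2},

which decays more aggressively for large residuals than the Huber weight.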
The kernels implemented so far, and the notation used here, are inspired by
the publication **"Analysis of Robust Functions for Registration Algorithms"**
by Philippe Babin et al.
For more information, please also see **"Adaptive Robust Kernels for
Non-Linear Least Squares Problems"** by Nived Chebrolu et al.
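Example:
    A minimal usage sketch of the Python binding defined below (the module
    path follows the submodule registered above; the kernel choice and
    parameter value are illustrative only)::

        import cloudViewer

        # Construct a Huber kernel with an illustrative scaling parameter.
        registration = cloudViewer.t.pipelines.registration
        kernel = registration.robust_kernel.RobustKernel(
            registration.robust_kernel.RobustKernelMethod.HuberLoss,
            scaling_parameter=0.1)
        print(kernel)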
)");
    py::detail::bind_copy_functions<RobustKernel>(robust_kernel);
    robust_kernel
            .def(py::init([](const RobustKernelMethod type,
                             const double scaling_parameter,
                             const double shape_parameter) {
                     return new RobustKernel(type, scaling_parameter,
                                             shape_parameter);
                 }),
                 py::arg_v("type", RobustKernelMethod::L2Loss,
                           "cloudViewer.t.pipelines.registration."
                           "RobustKernelMethod.L2Loss"),
                 "scaling_parameter"_a = 1.0, "shape_parameter"_a = 1.0)
            .def_readwrite("type", &RobustKernel::type_, "Loss type.")
            .def_readwrite("scaling_parameter",
                           &RobustKernel::scaling_parameter_,
                           "Scaling parameter.")
            .def_readwrite("shape_parameter", &RobustKernel::shape_parameter_,
                           "Shape parameter.")
            .def("__repr__", [](const RobustKernel &kernel) {
                return fmt::format(
                        "RobustKernel[scaling_parameter_={:e}, "
                        "shape_parameter_={:e}].",
                        kernel.scaling_parameter_, kernel.shape_parameter_);
            });
}

}  // namespace registration