.. highlight:: c++

.. default-domain:: cpp

.. _chapter-nnls_tutorial:

========================
Non-linear Least Squares
========================

Introduction
============

Ceres can solve bounds constrained robustified non-linear least
squares problems of the form

.. math::
   :label: ceresproblem

   \min_{\mathbf{x}} &\quad \frac{1}{2}\sum_{i} \rho_i\left(\left\|f_i\left(x_{i_1}, ... ,x_{i_k}\right)\right\|^2\right) \\
   \text{s.t.} &\quad l_j \le x_j \le u_j

Problems of this form come up in a broad range of areas across
science and engineering - from `fitting curves`_ in statistics, to
constructing `3D models from photographs`_ in computer vision.

.. _fitting curves: http://en.wikipedia.org/wiki/Nonlinear_regression
.. _3D models from photographs: http://en.wikipedia.org/wiki/Bundle_adjustment

In this chapter we will learn how to solve :eq:`ceresproblem` using
Ceres Solver. Full working code for all the examples described in this
chapter and more can be found in the `examples
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/>`_
directory.

The expression
:math:`\rho_i\left(\left\|f_i\left(x_{i_1},...,x_{i_k}\right)\right\|^2\right)`
is known as a ``ResidualBlock``, where :math:`f_i(\cdot)` is a
:class:`CostFunction` that depends on the parameter blocks
:math:`\left[x_{i_1},... , x_{i_k}\right]`. In most optimization
problems small groups of scalars occur together. For example, the three
components of a translation vector and the four components of the
quaternion that define the pose of a camera. We refer to such a group
of small scalars as a ``ParameterBlock``. Of course a
``ParameterBlock`` can just be a single parameter. :math:`l_j` and
:math:`u_j` are bounds on the parameter block :math:`x_j`.

:math:`\rho_i` is a :class:`LossFunction`. A :class:`LossFunction` is
a scalar function that is used to reduce the influence of outliers on
the solution of non-linear least squares problems.

As a special case, when :math:`\rho_i(x) = x`, i.e., the identity
function, and :math:`l_j = -\infty` and :math:`u_j = \infty` we get
the more familiar `non-linear least squares problem
<http://en.wikipedia.org/wiki/Non-linear_least_squares>`_.

.. math::
   :label: ceresproblemnonrobust

   \frac{1}{2}\sum_{i} \left\|f_i\left(x_{i_1}, ... ,x_{i_k}\right)\right\|^2.
.. _section-hello-world:

Hello World!
============

To get started, consider the problem of finding the minimum of the
function

.. math:: \frac{1}{2}(10 -x)^2.

This is a trivial problem, whose minimum is located at :math:`x = 10`,
but it is a good place to start to illustrate the basics of solving a
problem with Ceres [#f1]_.

The first step is to write a functor that will evaluate the
function :math:`f(x) = 10 - x`:

.. code-block:: c++

   struct CostFunctor {
     template <typename T>
     bool operator()(const T* const x, T* residual) const {
       residual[0] = 10.0 - x[0];
       return true;
     }
   };

The important thing to note here is that ``operator()`` is a templated
method, which assumes that all its inputs and outputs are of some type
``T``. The use of templating here allows Ceres to call
``CostFunctor::operator<T>()``, with ``T=double`` when just the value
of the residual is needed, and with a special type ``T=Jet`` when the
Jacobians are needed. In :ref:`section-derivatives` we will discuss the
various ways of supplying derivatives to Ceres in more detail.

Once we have a way of computing the residual function, it is now time
to construct a non-linear least squares problem using it and have
Ceres solve it.

.. code-block:: c++

   int main(int argc, char** argv) {
     google::InitGoogleLogging(argv[0]);

     // The variable to solve for with its initial value.
     double initial_x = 5.0;
     double x = initial_x;

     // Build the problem.
     Problem problem;

     // Set up the only cost function (also known as residual). This uses
     // auto-differentiation to obtain the derivative (jacobian).
     CostFunction* cost_function =
         new AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor);
     problem.AddResidualBlock(cost_function, nullptr, &x);

     // Run the solver!
     Solver::Options options;
     options.linear_solver_type = ceres::DENSE_QR;
     options.minimizer_progress_to_stdout = true;
     Solver::Summary summary;
     Solve(options, &problem, &summary);

     std::cout << summary.BriefReport() << "\n";
     std::cout << "x : " << initial_x << " -> " << x << "\n";
     return 0;
   }
:class:`AutoDiffCostFunction` takes a ``CostFunctor`` as input,
automatically differentiates it and gives it a :class:`CostFunction`
interface.

Compiling and running `examples/helloworld.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/helloworld.cc>`_
gives us

.. code-block:: bash

   iter      cost      cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
      0  4.512500e+01    0.00e+00    9.50e+00   0.00e+00   0.00e+00  1.00e+04        0    5.33e-04    3.46e-03
      1  4.511598e-07    4.51e+01    9.50e-04   9.50e+00   1.00e+00  3.00e+04        1    5.00e-04    4.05e-03
      2  5.012552e-16    4.51e-07    3.17e-08   9.50e-04   1.00e+00  9.00e+04        1    1.60e-05    4.09e-03
   Ceres Solver Report: Iterations: 2, Initial cost: 4.512500e+01, Final cost: 5.012552e-16, Termination: CONVERGENCE
   x : 5 -> 10

Starting from :math:`x=5`, the solver in two iterations goes to 10
[#f2]_. The careful reader will note that this is a linear problem and
one linear solve should be enough to get the optimal value. The
default configuration of the solver is aimed at non-linear problems,
and for reasons of simplicity we did not change it in this example. It
is indeed possible to obtain the solution to this problem using Ceres
in one iteration. Also note that the solver did get very close to the
optimal function value of 0 in the very first iteration. We will
discuss these issues in greater detail when we talk about convergence
and parameter settings for Ceres.
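The claim that one linear solve suffices can be checked by hand. For
:math:`f(x) = 10 - x` the Jacobian is the constant :math:`J = -1`, so a
single Gauss-Newton step :math:`\Delta x = -(J^T J)^{-1} J^T f(x)` lands
exactly on the minimum. Here is a minimal self-contained sketch of that
one step (the helper name ``GaussNewtonStep`` is ours, not part of
Ceres):

.. code-block:: c++

   // One Gauss-Newton step for f(x) = 10 - x. Since J = -1 is constant,
   // x - (J^T J)^{-1} J^T f(x) = x + f(x) = 10 for any starting point.
   double GaussNewtonStep(double x) {
     const double f = 10.0 - x;     // residual value
     const double J = -1.0;         // df/dx
     return x - (J * f) / (J * J);  // normal-equations step in 1D
   }

Starting from :math:`x = 5` this returns :math:`10` in a single step,
which is why the solver above is already essentially converged after
its first iteration.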
.. rubric:: Footnotes

.. [#f1] `examples/helloworld.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/helloworld.cc>`_

.. [#f2] Actually the solver ran for three iterations, and it was
   by looking at the value returned by the linear solver in the third
   iteration that it observed that the update to the parameter block was
   too small and declared convergence. Ceres only prints out the display
   at the end of an iteration, and terminates as soon as it detects
   convergence, which is why you only see two iterations here and not
   three.
.. _section-derivatives:

Derivatives
===========

Ceres Solver, like most optimization packages, depends on being able to
evaluate the value and the derivatives of each term in the objective
function at arbitrary parameter values. Doing so correctly and
efficiently is essential to getting good results. Ceres Solver
provides a number of ways of doing so. You have already seen one of
them in action -- automatic differentiation in `examples/helloworld.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/helloworld.cc>`_.

We now consider the other two possibilities: analytic and numeric
derivatives.

Numeric Derivatives
-------------------

In some cases, it is not possible to define a templated cost functor,
for example when the evaluation of the residual involves a call to a
library function that you do not have control over. In such a
situation, numerical differentiation can be used. The user defines a
functor which computes the residual value and constructs a
:class:`NumericDiffCostFunction` using it. e.g., for :math:`f(x) = 10 - x`
the corresponding functor would be

.. code-block:: c++

   struct NumericDiffCostFunctor {
     bool operator()(const double* const x, double* residual) const {
       residual[0] = 10.0 - x[0];
       return true;
     }
   };

which is added to the :class:`Problem` as:

.. code-block:: c++

   CostFunction* cost_function =
       new NumericDiffCostFunction<NumericDiffCostFunctor, ceres::CENTRAL, 1, 1>(
           new NumericDiffCostFunctor);
   problem.AddResidualBlock(cost_function, nullptr, &x);

Notice the parallel from when we were using automatic differentiation

.. code-block:: c++

   CostFunction* cost_function =
       new AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor);
   problem.AddResidualBlock(cost_function, nullptr, &x);

The construction looks almost identical to the one used for automatic
differentiation, except for an extra template parameter that indicates
the kind of finite differencing scheme to be used for computing the
numerical derivatives [#f3]_. For more details see the documentation
for :class:`NumericDiffCostFunction`.
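At its core, the ``ceres::CENTRAL`` scheme is ordinary central
differencing of the user's functor. Here is a minimal self-contained
sketch of the idea; note that Ceres' actual implementation chooses the
step size relative to the magnitude of :math:`x`, so the fixed step
used here is a simplification:

.. code-block:: c++

   // Residual from the example above: f(x) = 10 - x.
   double Residual(double x) { return 10.0 - x; }

   // Central differences: f'(x) ~ (f(x + h) - f(x - h)) / (2h).
   double CentralDifference(double (*f)(double), double x) {
     const double h = 1e-6;  // fixed step; Ceres uses a relative step
     return (f(x + h) - f(x - h)) / (2.0 * h);
   }

For this residual the scheme recovers the exact derivative :math:`-1`
up to rounding error; for non-linear residuals the truncation error of
central differences is :math:`O(h^2)`.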
**Generally speaking we recommend automatic differentiation instead of
numeric differentiation. The use of C++ templates makes automatic
differentiation efficient, whereas numeric differentiation is
expensive, prone to numeric errors, and leads to slower convergence.**
Analytic Derivatives
--------------------

In some cases, using automatic differentiation is not possible. For
example, it may be the case that it is more efficient to compute the
derivatives in closed form instead of relying on the chain rule used
by the automatic differentiation code.

In such cases, it is possible to supply your own residual and Jacobian
computation code. To do this, define a subclass of
:class:`CostFunction` or, if you know the sizes of the parameters and
residuals at compile time, :class:`SizedCostFunction`. Here for
example is ``QuadraticCostFunction`` that implements :math:`f(x) = 10 -
x`.

.. code-block:: c++

   class QuadraticCostFunction : public ceres::SizedCostFunction<1, 1> {
    public:
     virtual ~QuadraticCostFunction() {}
     virtual bool Evaluate(double const* const* parameters,
                           double* residuals,
                           double** jacobians) const {
       const double x = parameters[0][0];
       residuals[0] = 10 - x;

       // Compute the Jacobian if asked for.
       if (jacobians != nullptr && jacobians[0] != nullptr) {
         jacobians[0][0] = -1;
       }
       return true;
     }
   };

``QuadraticCostFunction::Evaluate`` is provided with an input array of
``parameters``, an output array ``residuals`` for residuals and an
output array ``jacobians`` for Jacobians. The ``jacobians`` array is
optional; ``Evaluate`` is expected to check when it is non-null, and
if that is the case, fill it with the values of the derivative of
the residual function. In this case since the residual function is
linear, the Jacobian is constant [#f4]_.

As can be seen from the above code fragments, implementing
:class:`CostFunction` objects is a bit tedious. We recommend that
unless you have a good reason to manage the Jacobian computation
yourself, you use :class:`AutoDiffCostFunction` or
:class:`NumericDiffCostFunction` to construct your residual blocks.
More About Derivatives
----------------------

Computing derivatives is by far the most complicated part of using
Ceres, and depending on the circumstance the user may need more
sophisticated ways of computing derivatives. This section just
scratches the surface of how derivatives can be supplied to
Ceres. Once you are comfortable with using
:class:`NumericDiffCostFunction` and :class:`AutoDiffCostFunction` we
recommend taking a look at :class:`DynamicAutoDiffCostFunction`,
:class:`CostFunctionToFunctor`, :class:`NumericDiffFunctor` and
:class:`ConditionedCostFunction` for more advanced ways of
constructing and computing cost functions.

.. rubric:: Footnotes

.. [#f3] `examples/helloworld_numeric_diff.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/helloworld_numeric_diff.cc>`_.

.. [#f4] `examples/helloworld_analytic_diff.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/helloworld_analytic_diff.cc>`_.
.. _section-powell:

Powell's Function
=================

Consider now a slightly more complicated example -- the minimization
of Powell's function [#f5]_. Let :math:`x = \left[x_1, x_2, x_3, x_4 \right]`
and

.. math::

   \begin{align}
      f_1(x) &= x_1 + 10x_2 \\
      f_2(x) &= \sqrt{5}  (x_3 - x_4)\\
      f_3(x) &= (x_2 - 2x_3)^2\\
      f_4(x) &= \sqrt{10}  (x_1 - x_4)^2\\
      F(x) &= \left[f_1(x),\ f_2(x),\ f_3(x),\ f_4(x) \right]
   \end{align}

:math:`F(x)` is a function of four parameters, has four residuals,
and we wish to find :math:`x` such that :math:`\frac{1}{2}\|F(x)\|^2`
is minimized.

Again, the first step is to define functors that evaluate the terms
in the objective function. Here is the code for evaluating
:math:`f_4(x_1, x_4)`:

.. code-block:: c++

   struct F4 {
     template <typename T>
     bool operator()(const T* const x1, const T* const x4, T* residual) const {
       residual[0] = sqrt(10.0) * (x1[0] - x4[0]) * (x1[0] - x4[0]);
       return true;
     }
   };

Similarly, we can define classes ``F1``, ``F2`` and ``F3`` to evaluate
:math:`f_1(x_1, x_2)`, :math:`f_2(x_3, x_4)` and :math:`f_3(x_2, x_3)`
respectively. Using these, the problem can be constructed as follows:

.. code-block:: c++

   double x1 =  3.0; double x2 = -1.0; double x3 =  0.0; double x4 = 1.0;

   Problem problem;

   // Add residual terms to the problem using the autodiff
   // wrapper to get the derivatives automatically.
   problem.AddResidualBlock(
       new AutoDiffCostFunction<F1, 1, 1, 1>(new F1), nullptr, &x1, &x2);
   problem.AddResidualBlock(
       new AutoDiffCostFunction<F2, 1, 1, 1>(new F2), nullptr, &x3, &x4);
   problem.AddResidualBlock(
       new AutoDiffCostFunction<F3, 1, 1, 1>(new F3), nullptr, &x2, &x3);
   problem.AddResidualBlock(
       new AutoDiffCostFunction<F4, 1, 1, 1>(new F4), nullptr, &x1, &x4);
Note that each ``ResidualBlock`` only depends on the two parameters
that the corresponding residual object depends on and not on all four
parameters. Compiling and running `examples/powell.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/powell.cc>`_
gives us:

.. code-block:: bash

   Initial x1 = 3, x2 = -1, x3 = 0, x4 = 1
   iter      cost      cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
      0  1.075000e+02    0.00e+00    1.55e+02   0.00e+00   0.00e+00  1.00e+04        0    4.95e-04    2.30e-03
      1  5.036190e+00    1.02e+02    2.00e+01   2.16e+00   9.53e-01  3.00e+04        1    4.39e-05    2.40e-03
      2  3.148168e-01    4.72e+00    2.50e+00   6.23e-01   9.37e-01  9.00e+04        1    9.06e-06    2.43e-03
      3  1.967760e-02    2.95e-01    3.13e-01   3.08e-01   9.37e-01  2.70e+05        1    8.11e-06    2.45e-03
      4  1.229900e-03    1.84e-02    3.91e-02   1.54e-01   9.37e-01  8.10e+05        1    6.91e-06    2.48e-03
      5  7.687123e-05    1.15e-03    4.89e-03   7.69e-02   9.37e-01  2.43e+06        1    7.87e-06    2.50e-03
      6  4.804625e-06    7.21e-05    6.11e-04   3.85e-02   9.37e-01  7.29e+06        1    5.96e-06    2.52e-03
      7  3.003028e-07    4.50e-06    7.64e-05   1.92e-02   9.37e-01  2.19e+07        1    5.96e-06    2.55e-03
      8  1.877006e-08    2.82e-07    9.54e-06   9.62e-03   9.37e-01  6.56e+07        1    5.96e-06    2.57e-03
      9  1.173223e-09    1.76e-08    1.19e-06   4.81e-03   9.37e-01  1.97e+08        1    7.87e-06    2.60e-03
     10  7.333425e-11    1.10e-09    1.49e-07   2.40e-03   9.37e-01  5.90e+08        1    6.20e-06    2.63e-03
     11  4.584044e-12    6.88e-11    1.86e-08   1.20e-03   9.37e-01  1.77e+09        1    6.91e-06    2.65e-03
     12  2.865573e-13    4.30e-12    2.33e-09   6.02e-04   9.37e-01  5.31e+09        1    5.96e-06    2.67e-03
     13  1.791438e-14    2.69e-13    2.91e-10   3.01e-04   9.37e-01  1.59e+10        1    7.15e-06    2.69e-03

   Ceres Solver v1.12.0 Solve Report
   ----------------------------------
                                        Original                  Reduced
   Parameter blocks                            4                        4
   Parameters                                  4                        4
   Residual blocks                             4                        4
   Residual                                    4                        4

   Minimizer                        TRUST_REGION
   Dense linear algebra library            EIGEN
   Trust region strategy     LEVENBERG_MARQUARDT

                                           Given                     Used
   Linear solver                        DENSE_QR                 DENSE_QR
   Threads                                     1                        1
   Linear solver threads                       1                        1

   Cost:
   Initial                          1.075000e+02
   Final                            1.791438e-14
   Change                           1.075000e+02

   Minimizer iterations                       14
   Successful steps                           14
   Unsuccessful steps                          0

   Time (in seconds):
   Preprocessor                            0.002

     Residual evaluation                   0.000
     Jacobian evaluation                   0.000
     Linear solver                         0.000
   Minimizer                               0.001

   Postprocessor                           0.000
   Total                                   0.005

   Termination:                      CONVERGENCE (Gradient tolerance reached. Gradient max norm: 3.642190e-11 <= 1.000000e-10)

   Final x1 = 0.000292189, x2 = -2.92189e-05, x3 = 4.79511e-05, x4 = 4.79511e-05

It is easy to see that the optimal solution to this problem is at
:math:`x_1=0, x_2=0, x_3=0, x_4=0` with an objective function value of
:math:`0`. In 14 iterations, Ceres finds a solution with an objective
function value of :math:`1.8\times 10^{-14}`.
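The costs reported in the log are easy to verify by direct
evaluation. The following self-contained sketch (the helper name
``PowellCost`` is ours) evaluates :math:`\frac{1}{2}\|F(x)\|^2` exactly
as defined above:

.. code-block:: c++

   #include <cmath>

   // 0.5 * ||F(x)||^2 for Powell's function as defined above.
   double PowellCost(double x1, double x2, double x3, double x4) {
     const double f1 = x1 + 10.0 * x2;
     const double f2 = std::sqrt(5.0) * (x3 - x4);
     const double f3 = (x2 - 2.0 * x3) * (x2 - 2.0 * x3);
     const double f4 = std::sqrt(10.0) * (x1 - x4) * (x1 - x4);
     return 0.5 * (f1 * f1 + f2 * f2 + f3 * f3 + f4 * f4);
   }

At the starting point :math:`(3, -1, 0, 1)` this gives the initial cost
:math:`107.5` shown in the log, and at the origin it is exactly zero.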
.. rubric:: Footnotes

.. [#f5] `examples/powell.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/powell.cc>`_.
.. _section-fitting:

Curve Fitting
=============

The examples we have seen until now are simple optimization problems
with no data. The original purpose of least squares and non-linear
least squares analysis was fitting curves to data. It is only
appropriate that we now consider an example of such a problem
[#f6]_. The data used here was generated by sampling the curve
:math:`y = e^{0.3x + 0.1}` and adding Gaussian noise with standard
deviation :math:`\sigma = 0.2`. Let us fit this data to the curve

.. math::  y = e^{mx + c}.

We begin by defining a templated object to evaluate the
residual. There will be a residual for each observation.

.. code-block:: c++

   struct ExponentialResidual {
     ExponentialResidual(double x, double y)
         : x_(x), y_(y) {}

     template <typename T>
     bool operator()(const T* const m, const T* const c, T* residual) const {
       residual[0] = y_ - exp(m[0] * x_ + c[0]);
       return true;
     }

    private:
     // Observations for a sample.
     const double x_;
     const double y_;
   };

Assuming the observations are in a :math:`2n` sized array called
``data``, the problem construction is a simple matter of creating a
:class:`CostFunction` for every observation.

.. code-block:: c++

   double m = 0.0;
   double c = 0.0;

   Problem problem;
   for (int i = 0; i < kNumObservations; ++i) {
     CostFunction* cost_function =
         new AutoDiffCostFunction<ExponentialResidual, 1, 1, 1>(
             new ExponentialResidual(data[2 * i], data[2 * i + 1]));
     problem.AddResidualBlock(cost_function, nullptr, &m, &c);
   }
Compiling and running `examples/curve_fitting.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/curve_fitting.cc>`_
gives us:

.. code-block:: bash

   iter      cost      cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
      0  1.211734e+02    0.00e+00    3.61e+02   0.00e+00   0.00e+00  1.00e+04        0    5.34e-04    2.56e-03
      1  1.211734e+02   -2.21e+03    0.00e+00   7.52e-01  -1.87e+01  5.00e+03        1    4.29e-05    3.25e-03
      2  1.211734e+02   -2.21e+03    0.00e+00   7.51e-01  -1.86e+01  1.25e+03        1    1.10e-05    3.28e-03
      3  1.211734e+02   -2.19e+03    0.00e+00   7.48e-01  -1.85e+01  1.56e+02        1    1.41e-05    3.31e-03
      4  1.211734e+02   -2.02e+03    0.00e+00   7.22e-01  -1.70e+01  9.77e+00        1    1.00e-05    3.34e-03
      5  1.211734e+02   -7.34e+02    0.00e+00   5.78e-01  -6.32e+00  3.05e-01        1    1.00e-05    3.36e-03
      6  3.306595e+01    8.81e+01    4.10e+02   3.18e-01   1.37e+00  9.16e-01        1    2.79e-05    3.41e-03
      7  6.426770e+00    2.66e+01    1.81e+02   1.29e-01   1.10e+00  2.75e+00        1    2.10e-05    3.45e-03
      8  3.344546e+00    3.08e+00    5.51e+01   3.05e-02   1.03e+00  8.24e+00        1    2.10e-05    3.48e-03
      9  1.987485e+00    1.36e+00    2.33e+01   8.87e-02   9.94e-01  2.47e+01        1    2.10e-05    3.52e-03
     10  1.211585e+00    7.76e-01    8.22e+00   1.05e-01   9.89e-01  7.42e+01        1    2.10e-05    3.56e-03
     11  1.063265e+00    1.48e-01    1.44e+00   6.06e-02   9.97e-01  2.22e+02        1    2.60e-05    3.61e-03
     12  1.056795e+00    6.47e-03    1.18e-01   1.47e-02   1.00e+00  6.67e+02        1    2.10e-05    3.64e-03
     13  1.056751e+00    4.39e-05    3.79e-03   1.28e-03   1.00e+00  2.00e+03        1    2.10e-05    3.68e-03
   Ceres Solver Report: Iterations: 13, Initial cost: 1.211734e+02, Final cost: 1.056751e+00, Termination: CONVERGENCE
   Initial m: 0 c: 0
   Final   m: 0.291861 c: 0.131439

Starting from parameter values :math:`m = 0, c = 0` with an initial
objective function value of :math:`121.173`, Ceres finds a solution
:math:`m = 0.291861, c = 0.131439` with an objective function value of
:math:`1.05675`. These values are a bit different from the
parameters of the original model :math:`m = 0.3, c = 0.1`, but this is
expected. When reconstructing a curve from noisy data, we expect to
see such deviations. Indeed, if you were to evaluate the objective
function for :math:`m = 0.3, c = 0.1`, the fit is worse, with an
objective function value of :math:`1.082425`. The figure below
illustrates the fit.
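These objective values can be reproduced by evaluating
:math:`\frac{1}{2}\sum_i \left(y_i - e^{m x_i + c}\right)^2`
directly. A self-contained sketch (the helper name ``CurveFitCost`` is
ours; the data layout matches the interleaved ``data`` array used
above):

.. code-block:: c++

   #include <cmath>
   #include <vector>

   // 0.5 * sum_i (y_i - exp(m * x_i + c))^2 over (x, y) pairs stored
   // interleaved as [x0, y0, x1, y1, ...].
   double CurveFitCost(const std::vector<double>& data, double m, double c) {
     double cost = 0.0;
     for (std::size_t i = 0; i + 1 < data.size(); i += 2) {
       const double r = data[i + 1] - std::exp(m * data[i] + c);
       cost += 0.5 * r * r;
     }
     return cost;
   }

On noise-free samples of :math:`y = e^{0.3x + 0.1}` this evaluates to
exactly zero at :math:`m = 0.3, c = 0.1`; on the noisy data it is the
objective whose minimum the solver finds.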
.. figure:: least_squares_fit.png
   :figwidth: 500px
   :height: 400px
   :align: center

   Least squares curve fitting.

.. rubric:: Footnotes

.. [#f6] `examples/curve_fitting.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/curve_fitting.cc>`_
Robust Curve Fitting
====================

Now suppose the data we are given has some outliers, i.e., we have
some points that do not obey the noise model. If we were to use the
code above to fit such data, we would get a fit that looks as
below. Notice how the fitted curve deviates from the ground truth.

.. figure:: non_robust_least_squares_fit.png
   :figwidth: 500px
   :height: 400px
   :align: center

To deal with outliers, a standard technique is to use a
:class:`LossFunction`. Loss functions reduce the influence of
residual blocks with high residuals, usually the ones corresponding to
outliers. To associate a loss function with a residual block, we change

.. code-block:: c++

   problem.AddResidualBlock(cost_function, nullptr, &m, &c);

to

.. code-block:: c++

   problem.AddResidualBlock(cost_function, new CauchyLoss(0.5), &m, &c);

:class:`CauchyLoss` is one of the loss functions that ships with Ceres
Solver. The argument :math:`0.5` specifies the scale of the loss
function. As a result, we get the fit below [#f7]_. Notice how the
fitted curve moves back closer to the ground truth curve.
.. figure:: robust_least_squares_fit.png
   :figwidth: 500px
   :height: 400px
   :align: center

   Using :class:`LossFunction` to reduce the effect of outliers on a
   least squares fit.
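Why this works can be seen from the shape of the Cauchy loss. In
Ceres, ``CauchyLoss(a)`` applies :math:`\rho(s) = a^2 \log\left(1 +
\frac{s}{a^2}\right)` to the squared residual :math:`s`: for small
residuals it behaves like the identity, while for large ones it grows
only logarithmically, muting the pull of outliers. A self-contained
sketch of the scalar function (the helper name ``CauchyRho`` is ours):

.. code-block:: c++

   #include <cmath>

   // rho(s) = a^2 * log(1 + s / a^2): the Cauchy loss with scale a,
   // applied to the squared residual s.
   double CauchyRho(double s, double a) {
     const double b = a * a;
     return b * std::log(1.0 + s / b);
   }

With scale :math:`a = 0.5`, a squared residual of :math:`100` is mapped
to roughly :math:`1.5`, while tiny residuals pass through almost
unchanged, which is exactly the behavior that pulls the fit back toward
the inliers.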
.. rubric:: Footnotes

.. [#f7] `examples/robust_curve_fitting.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/robust_curve_fitting.cc>`_
Bundle Adjustment
=================

One of the main reasons for writing Ceres was our need to solve large
scale bundle adjustment problems [HartleyZisserman]_, [Triggs]_.

Given a set of measured image feature locations and correspondences,
the goal of bundle adjustment is to find 3D point positions and camera
parameters that minimize the reprojection error. This optimization
problem is usually formulated as a non-linear least squares problem,
where the error is the squared :math:`L_2` norm of the difference between
the observed feature location and the projection of the corresponding
3D point on the image plane of the camera. Ceres has extensive support
for solving bundle adjustment problems.

Let us solve a problem from the `BAL
<http://grail.cs.washington.edu/projects/bal/>`_ dataset [#f8]_.

The first step as usual is to define a templated functor that computes
the reprojection error/residual. The structure of the functor is
similar to the ``ExponentialResidual``, in that there is an
instance of this object responsible for each image observation.

Each residual in a BAL problem depends on a three dimensional point
and a nine parameter camera. The nine parameters defining the camera
are: three for rotation as a Rodrigues' axis-angle vector, three
for translation, one for focal length and two for radial distortion.
The details of this camera model can be found on the `Bundler homepage
<http://phototour.cs.washington.edu/bundler/>`_ and the `BAL homepage
<http://grail.cs.washington.edu/projects/bal/>`_.

.. code-block:: c++

   struct SnavelyReprojectionError {
     SnavelyReprojectionError(double observed_x, double observed_y)
         : observed_x(observed_x), observed_y(observed_y) {}

     template <typename T>
     bool operator()(const T* const camera,
                     const T* const point,
                     T* residuals) const {
       // camera[0,1,2] are the angle-axis rotation.
       T p[3];
       ceres::AngleAxisRotatePoint(camera, point, p);

       // camera[3,4,5] are the translation.
       p[0] += camera[3]; p[1] += camera[4]; p[2] += camera[5];

       // Compute the center of distortion. The sign change comes from
       // the camera model that Noah Snavely's Bundler assumes, whereby
       // the camera coordinate system has a negative z axis.
       T xp = - p[0] / p[2];
       T yp = - p[1] / p[2];

       // Apply second and fourth order radial distortion.
       const T& l1 = camera[7];
       const T& l2 = camera[8];
       T r2 = xp*xp + yp*yp;
       T distortion = 1.0 + r2 * (l1 + l2 * r2);

       // Compute final projected point position.
       const T& focal = camera[6];
       T predicted_x = focal * distortion * xp;
       T predicted_y = focal * distortion * yp;

       // The error is the difference between the predicted and observed position.
       residuals[0] = predicted_x - T(observed_x);
       residuals[1] = predicted_y - T(observed_y);
       return true;
     }

     // Factory to hide the construction of the CostFunction object from
     // the client code.
     static ceres::CostFunction* Create(const double observed_x,
                                        const double observed_y) {
       return (new ceres::AutoDiffCostFunction<SnavelyReprojectionError, 2, 9, 3>(
           new SnavelyReprojectionError(observed_x, observed_y)));
     }

     double observed_x;
     double observed_y;
   };
  518. Note that unlike the examples before, this is a non-trivial function
  519. and computing its analytic Jacobian is a bit of a pain. Automatic
  520. differentiation makes life much simpler. The function
  521. :func:`AngleAxisRotatePoint` and other functions for manipulating
  522. rotations can be found in ``include/ceres/rotation.h``.
  523. Given this functor, the bundle adjustment problem can be constructed
  524. as follows:
  525. .. code-block:: c++
  526. ceres::Problem problem;
  527. for (int i = 0; i < bal_problem.num_observations(); ++i) {
  528. ceres::CostFunction* cost_function =
  529. SnavelyReprojectionError::Create(
  530. bal_problem.observations()[2 * i + 0],
  531. bal_problem.observations()[2 * i + 1]);
  532. problem.AddResidualBlock(cost_function,
  533. nullptr /* squared loss */,
  534. bal_problem.mutable_camera_for_observation(i),
  535. bal_problem.mutable_point_for_observation(i));
  536. }
Notice that the problem construction for bundle adjustment is very
similar to the curve fitting example -- one term is added to the
objective function per observation.

Since this is a large sparse problem (well, large for ``DENSE_QR``
anyways), one way to solve this problem is to set
:member:`Solver::Options::linear_solver_type` to
``SPARSE_NORMAL_CHOLESKY`` and call :func:`Solve`. And while this is
a reasonable thing to do, bundle adjustment problems have a special
sparsity structure that can be exploited to solve them much more
efficiently. Ceres provides three specialized solvers (collectively
known as Schur-based solvers) for this task. The example code uses the
simplest of them: ``DENSE_SCHUR``.
.. code-block:: c++

  ceres::Solver::Options options;
  options.linear_solver_type = ceres::DENSE_SCHUR;
  options.minimizer_progress_to_stdout = true;
  ceres::Solver::Summary summary;
  ceres::Solve(options, &problem, &summary);
  std::cout << summary.FullReport() << "\n";
For a more sophisticated bundle adjustment example which demonstrates
the use of Ceres' more advanced features including its various linear
solvers, robust loss functions and manifolds, see
`examples/bundle_adjuster.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/bundle_adjuster.cc>`_

.. rubric:: Footnotes

.. [#f8] `examples/simple_bundle_adjuster.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/simple_bundle_adjuster.cc>`_
Other Examples
==============

Besides the examples in this chapter, the `examples
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/>`_
directory contains a number of other examples:
#. `bundle_adjuster.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/bundle_adjuster.cc>`_
   shows how to use the various features of Ceres to solve bundle
   adjustment problems.

#. `circle_fit.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/circle_fit.cc>`_
   shows how to fit data to a circle.

#. `ellipse_approximation.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/ellipse_approximation.cc>`_
   fits points randomly distributed on an ellipse with an approximate
   line segment contour. This is done by jointly optimizing the
   control points of the line segment contour along with the preimage
   positions for the data points. The purpose of this example is to
   show an example use case for ``Solver::Options::dynamic_sparsity``,
   and how it can benefit problems which are numerically dense but
   dynamically sparse.

#. `denoising.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/denoising.cc>`_
   implements image denoising using the `Fields of Experts
   <http://www.gris.informatik.tu-darmstadt.de/~sroth/research/foe/index.html>`_
   model.
#. `nist.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/nist.cc>`_
   implements and attempts to solve the `NIST
   <http://www.itl.nist.gov/div898/strd/nls/nls_main.shtml>`_
   non-linear regression problems.
#. `more_garbow_hillstrom.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/more_garbow_hillstrom.cc>`_
   A subset of the test problems from the paper

     Testing Unconstrained Optimization Software
     Jorge J. More, Burton S. Garbow and Kenneth E. Hillstrom
     ACM Transactions on Mathematical Software, 7(1), pp. 17-41, 1981

   which were augmented with bounds and used for testing bounds
   constrained optimization algorithms by

     A Trust Region Approach to Linearly Constrained Optimization
     David M. Gay
     Numerical Analysis (Griffiths, D.F., ed.), pp. 72-105
     Lecture Notes in Mathematics 1066, Springer Verlag, 1984.
#. `libmv_bundle_adjuster.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/libmv_bundle_adjuster.cc>`_
   is the bundle adjustment algorithm used by `Blender <https://www.blender.org>`_/libmv.

#. `libmv_homography.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/libmv_homography.cc>`_
   This file demonstrates solving for a homography between two sets of
   points and using a custom exit criterion by having a callback check
   for image-space error.
#. `robot_pose_mle.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/robot_pose_mle.cc>`_
   This example demonstrates how to use the ``DynamicAutoDiffCostFunction``
   variant of ``CostFunction``. The ``DynamicAutoDiffCostFunction`` is meant to
   be used in cases where the number of parameter blocks or their sizes are not
   known at compile time.

   This example simulates a robot traversing down a 1-dimensional hallway with
   noisy odometry readings and noisy range readings of the end of the hallway.
   By fusing the noisy odometry and sensor readings this example demonstrates
   how to compute the maximum likelihood estimate (MLE) of the robot's pose at
   each timestep.
#. `slam/pose_graph_2d/pose_graph_2d.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/slam/pose_graph_2d/pose_graph_2d.cc>`_
   The Simultaneous Localization and Mapping (SLAM) problem consists of building
   a map of an unknown environment while simultaneously localizing against this
   map. The main difficulty of this problem stems from not having any additional
   external aiding information such as GPS. SLAM has been considered one of the
   fundamental challenges of robotics. There are many resources on SLAM
   [#f9]_. A pose graph optimization problem is one example of a SLAM
   problem. The following explains how to formulate the pose graph based SLAM
   problem in 2-Dimensions with relative pose constraints.

   Consider a robot moving in a 2-Dimensional plane. The robot has access to a
   set of sensors such as wheel odometry or a laser range scanner. From these
   raw measurements, we want to estimate the trajectory of the robot as well as
   build a map of the environment. In order to reduce the computational
   complexity of the problem, the pose graph approach abstracts the raw
   measurements away. Specifically, it creates a graph of nodes which represent
   the pose of the robot, and edges which represent the relative transformation
   (delta position and orientation) between the two nodes. The edges are virtual
   measurements derived from the raw sensor measurements, e.g. by integrating
   the raw wheel odometry or aligning the laser range scans acquired from the
   robot. A visualization of the resulting graph is shown below.
   .. figure:: slam2d.png
      :figwidth: 500px
      :height: 400px
      :align: center

      Visual representation of a graph SLAM problem.

   The figure depicts the pose of the robot as the triangles, the measurements
   are indicated by the connecting lines, and the loop closure measurements are
   shown as dotted lines. Loop closures are measurements between non-sequential
   robot states and they reduce the accumulation of error over time. The
   following will describe the mathematical formulation of the pose graph
   problem.
   The robot at timestamp :math:`t` has state :math:`x_t = [p^T, \psi]^T` where
   :math:`p` is a 2D vector that represents the position in the plane and
   :math:`\psi` is the orientation in radians. The measurement of the relative
   transform between the robot state at two timestamps :math:`a` and :math:`b`
   is given as: :math:`z_{ab} = [\hat{p}_{ab}^T, \hat{\psi}_{ab}]`. The residual
   implemented in the Ceres cost function which computes the error between the
   measurement and the predicted measurement is:

   .. math:: r_{ab} =
      \left[
      \begin{array}{c}
        R_a^T\left(p_b - p_a\right) - \hat{p}_{ab} \\
        \mathrm{Normalize}\left(\psi_b - \psi_a - \hat{\psi}_{ab}\right)
      \end{array}
      \right]
   where the function :math:`\mathrm{Normalize}()` normalizes the angle in the
   range :math:`[-\pi,\pi)`, and :math:`R` is the rotation matrix given by

   .. math:: R_a =
      \left[
      \begin{array}{cc}
        \cos \psi_a & -\sin \psi_a \\
        \sin \psi_a & \cos \psi_a \\
      \end{array}
      \right]
   To finish the cost function, we need to weight the residual by the
   uncertainty of the measurement. Hence, we pre-multiply the residual by the
   inverse square root of the covariance matrix for the measurement,
   i.e. :math:`\Sigma_{ab}^{-\frac{1}{2}} r_{ab}` where :math:`\Sigma_{ab}` is
   the covariance.
   Lastly, we use a manifold to normalize the orientation in the range
   :math:`[-\pi,\pi)`. Specifically, we define the
   :member:`AngleManifold::Plus()` function to be
   :math:`\mathrm{Normalize}(\psi + \Delta)` and the
   :member:`AngleManifold::Minus()` function to be
   :math:`\mathrm{Normalize}(y) - \mathrm{Normalize}(x)`.
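   The angle normalization and the residual above can be sketched in plain C++. This is a self-contained illustration, not the actual functor used by the example, and the weighting by :math:`\Sigma_{ab}^{-\frac{1}{2}}` is omitted:

   ```cpp
   #include <cmath>
   #include <cstdio>

   // Normalize an angle to the range [-pi, pi).
   double Normalize(double angle) {
     const double pi = 3.14159265358979323846;
     angle = std::fmod(angle + pi, 2.0 * pi);
     if (angle < 0.0) angle += 2.0 * pi;
     return angle - pi;
   }

   // Unweighted 2D pose graph residual: states are [x, y, psi]; the
   // measurement is the relative translation p_ab_hat and rotation psi_ab_hat.
   void Residual2D(const double xa[3], const double xb[3],
                   const double p_ab_hat[2], double psi_ab_hat, double r[3]) {
     const double c = std::cos(xa[2]), s = std::sin(xa[2]);
     const double dx = xb[0] - xa[0], dy = xb[1] - xa[1];
     r[0] =  c * dx + s * dy - p_ab_hat[0];  // R_a^T (p_b - p_a) - p_ab_hat
     r[1] = -s * dx + c * dy - p_ab_hat[1];
     r[2] = Normalize(xb[2] - xa[2] - psi_ab_hat);
   }

   int main() {
     const double xa[3] = {0.0, 0.0, 0.0};
     const double xb[3] = {1.0, 0.0, 0.0};
     const double p_ab_hat[2] = {1.0, 0.0};
     double r[3];
     Residual2D(xa, xb, p_ab_hat, 0.0, r);
     // A measurement consistent with the two states yields a zero residual.
     std::printf("%.3f %.3f %.3f\n", r[0], r[1], r[2]);
     return 0;
   }
   ```

   In the real example the same computation is written as a templated functor so that Ceres can differentiate it automatically.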
   This package includes an executable :member:`pose_graph_2d` that will read a
   problem definition file. This executable can work with any 2D problem
   definition that uses the g2o format. It would be relatively straightforward
   to implement a new reader for a different format such as TORO or
   others. :member:`pose_graph_2d` will print the Ceres solver full summary and
   then output to disk the original and optimized poses (``poses_original.txt``
   and ``poses_optimized.txt``, respectively) of the robot in the following
   format:
   .. code-block:: bash

      pose_id x y yaw_radians
      pose_id x y yaw_radians
      pose_id x y yaw_radians

   where ``pose_id`` is the corresponding integer ID from the file
   definition. Note, the file will be sorted in ascending order for the
   ``pose_id``.
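   A minimal reader for this format might look like the following. ``Pose2d`` and ``ParsePoses`` are hypothetical helpers for illustration, not part of the example code:

   ```cpp
   #include <cstdio>
   #include <sstream>
   #include <string>
   #include <vector>

   // One line of the output file: pose_id x y yaw_radians.
   struct Pose2d { int id; double x, y, yaw; };

   // Parse whitespace-separated pose records from the file contents.
   std::vector<Pose2d> ParsePoses(const std::string& text) {
     std::vector<Pose2d> poses;
     std::istringstream in(text);
     Pose2d p;
     while (in >> p.id >> p.x >> p.y >> p.yaw) poses.push_back(p);
     return poses;
   }

   int main() {
     const std::string text = "0 0.0 0.0 0.0\n1 1.0 0.5 0.1\n";
     std::vector<Pose2d> poses = ParsePoses(text);
     std::printf("%zu poses, last yaw = %.1f\n", poses.size(), poses.back().yaw);
     return 0;
   }
   ```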
   The executable :member:`pose_graph_2d` expects the first argument to be
   the path to the problem definition. To run the executable,

   .. code-block:: bash

      /path/to/bin/pose_graph_2d /path/to/dataset/dataset.g2o

   A Python script is provided to visualize the resulting output files.

   .. code-block:: bash

      /path/to/repo/examples/slam/pose_graph_2d/plot_results.py --optimized_poses ./poses_optimized.txt --initial_poses ./poses_original.txt
   As an example, a standard synthetic benchmark dataset [#f10]_ created by
   Edwin Olson which has 3500 nodes in a grid world with a total of 5598 edges
   was solved. Visualizing the results with the provided script produces:

   .. figure:: manhattan_olson_3500_result.png
      :figwidth: 600px
      :height: 600px
      :align: center

   with the original poses in green and the optimized poses in blue. As shown,
   the optimized poses more closely match the underlying grid world. Note, the
   left side of the graph has a small yaw drift due to a lack of relative
   constraints to provide enough information to reconstruct the trajectory.
   .. rubric:: Footnotes

   .. [#f9] Giorgio Grisetti, Rainer Kummerle, Cyrill Stachniss, Wolfram
      Burgard. A Tutorial on Graph-Based SLAM. IEEE Intelligent Transportation
      Systems Magazine, 52(3):199-222, 2010.

   .. [#f10] E. Olson, J. Leonard, and S. Teller, “Fast iterative optimization of
      pose graphs with poor initial estimates,” in Robotics and Automation
      (ICRA), IEEE International Conference on, 2006, pp. 2262-2269.
#. `slam/pose_graph_3d/pose_graph_3d.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/slam/pose_graph_3d/pose_graph_3d.cc>`_
   The following explains how to formulate the pose graph based SLAM problem in
   3-Dimensions with relative pose constraints. The example also illustrates how
   to use Eigen's geometry module with Ceres's automatic differentiation
   functionality.
   The robot at timestamp :math:`t` has state :math:`x_t = [p^T, q^T]^T` where
   :math:`p` is a 3D vector that represents the position and :math:`q` is the
   orientation represented as an Eigen quaternion. The measurement of the
   relative transform between the robot state at two timestamps :math:`a` and
   :math:`b` is given as: :math:`z_{ab} = [\hat{p}_{ab}^T, \hat{q}_{ab}^T]^T`.
   The residual implemented in the Ceres cost function which computes the error
   between the measurement and the predicted measurement is:

   .. math:: r_{ab} =
      \left[
      \begin{array}{c}
        R(q_a)^{T} (p_b - p_a) - \hat{p}_{ab} \\
        2.0 \mathrm{vec}\left((q_a^{-1} q_b) \hat{q}_{ab}^{-1}\right)
      \end{array}
      \right]

   where the function :math:`\mathrm{vec}()` returns the vector part of the
   quaternion, i.e. :math:`[q_x, q_y, q_z]`, and :math:`R(q)` is the rotation
   matrix for the quaternion.
   To finish the cost function, we need to weight the residual by the
   uncertainty of the measurement. Hence, we pre-multiply the residual by the
   inverse square root of the covariance matrix for the measurement,
   i.e. :math:`\Sigma_{ab}^{-\frac{1}{2}} r_{ab}` where :math:`\Sigma_{ab}` is
   the covariance.
   Given that we are using a quaternion to represent the orientation,
   we need to use a manifold (:class:`EigenQuaternionManifold`) to
   only apply updates orthogonal to the 4-vector defining the
   quaternion. Eigen's quaternion uses a different internal memory
   layout for the elements of the quaternion than what is commonly
   used. Specifically, Eigen stores the elements in memory as
   :math:`[x, y, z, w]` where the real part is last, whereas it is
   typically stored first. Note, when creating an Eigen quaternion
   through the constructor the elements are accepted in :math:`w`,
   :math:`x`, :math:`y`, :math:`z` order. Since Ceres operates on
   parameter blocks which are raw double pointers this difference is
   important and requires a different parameterization.
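   The orientation term of the residual and the storage-order caveat can be illustrated with a small hand-rolled quaternion. This is a self-contained sketch; the actual example uses Eigen's quaternion type together with :class:`EigenQuaternionManifold`:

   ```cpp
   #include <cmath>
   #include <cstdio>

   // Quaternion stored Eigen-style in memory as [x, y, z, w] (real part
   // last), while the constructor takes w, x, y, z -- mirroring Eigen.
   struct Quat {
     double v[4];  // x, y, z, w
     Quat(double w, double x, double y, double z) {
       v[0] = x; v[1] = y; v[2] = z; v[3] = w;
     }
   };

   // Hamilton product a * b.
   Quat Mul(const Quat& a, const Quat& b) {
     const double aw = a.v[3], ax = a.v[0], ay = a.v[1], az = a.v[2];
     const double bw = b.v[3], bx = b.v[0], by = b.v[1], bz = b.v[2];
     return Quat(aw * bw - ax * bx - ay * by - az * bz,
                 aw * bx + ax * bw + ay * bz - az * by,
                 aw * by - ax * bz + ay * bw + az * bx,
                 aw * bz + ax * by - ay * bx + az * bw);
   }

   // The inverse of a unit quaternion is its conjugate.
   Quat Conj(const Quat& q) { return Quat(q.v[3], -q.v[0], -q.v[1], -q.v[2]); }

   // Orientation part of the residual: 2 * vec((q_a^{-1} q_b) q_ab_hat^{-1}).
   void OrientationResidual(const Quat& qa, const Quat& qb,
                            const Quat& q_ab_hat, double r[3]) {
     Quat delta = Mul(Mul(Conj(qa), qb), Conj(q_ab_hat));
     r[0] = 2.0 * delta.v[0];
     r[1] = 2.0 * delta.v[1];
     r[2] = 2.0 * delta.v[2];
   }

   int main() {
     // qb is a 90 degree rotation about z, and the measurement agrees.
     const double h = std::sqrt(0.5);
     Quat qa(1, 0, 0, 0), qb(h, 0, 0, h), q_ab_hat(h, 0, 0, h);
     double r[3];
     OrientationResidual(qa, qb, q_ab_hat, r);
     // A consistent measurement yields a zero orientation residual.
     std::printf("%.3f %.3f %.3f\n", r[0], r[1], r[2]);
     return 0;
   }
   ```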
   This package includes an executable :member:`pose_graph_3d` that will read a
   problem definition file. This executable can work with any 3D problem
   definition that uses the g2o format with quaternions used for the orientation
   representation. It would be relatively straightforward to implement a new
   reader for a different format such as TORO or others. :member:`pose_graph_3d`
   will print the Ceres solver full summary and then output to disk the original
   and optimized poses (``poses_original.txt`` and ``poses_optimized.txt``,
   respectively) of the robot in the following format:
   .. code-block:: bash

      pose_id x y z q_x q_y q_z q_w
      pose_id x y z q_x q_y q_z q_w
      pose_id x y z q_x q_y q_z q_w
      ...

   where ``pose_id`` is the corresponding integer ID from the file
   definition. Note, the file will be sorted in ascending order for the
   ``pose_id``.
   The executable :member:`pose_graph_3d` expects the first argument to be the
   path to the problem definition. The executable can be run via

   .. code-block:: bash

      /path/to/bin/pose_graph_3d /path/to/dataset/dataset.g2o

   A script is provided to visualize the resulting output files. There is also
   an option to enable equal axes using ``--axes_equal``.

   .. code-block:: bash

      /path/to/repo/examples/slam/pose_graph_3d/plot_results.py --optimized_poses ./poses_optimized.txt --initial_poses ./poses_original.txt
   As an example, a standard synthetic benchmark dataset [#f9]_ in which the
   robot is traveling on the surface of a sphere, with 2500 nodes and a total
   of 4949 edges, was solved. Visualizing the results with the provided script
   produces:

   .. figure:: pose_graph_3d_ex.png
      :figwidth: 600px
      :height: 300px
      :align: center