gtsam/gtsam/nonlinear/doc/NonlinearFactor.ipynb

{
"cells": [
{
"cell_type": "markdown",
"id": "381ccaaa",
"metadata": {},
"source": [
"# NonlinearFactor\n",
"\n",
"## Overview\n",
"\n",
"The `NonlinearFactor` class in GTSAM is a fundamental component used in nonlinear optimization. It represents a factor in a factor graph. The class is designed to work with nonlinear, continuous functions."
]
},
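{
"cell_type": "markdown",
"id": "1a2b3c4d",
"metadata": {},
"source": [
"As a minimal sketch of how a concrete factor is used (assuming the `gtsam` Python wrapper and `numpy` are installed; the factor type, key, and numbers below are illustrative choices), the next cell constructs a `PriorFactorPose2`, one of the built-in `NonlinearFactor` subclasses, and adds it to a `NonlinearFactorGraph`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5e6f7a8b",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import gtsam\n",
"\n",
"# A prior factor is one concrete NonlinearFactor subclass: it constrains a single Pose2 variable.\n",
"prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))\n",
"x1 = gtsam.symbol('x', 1)\n",
"prior = gtsam.PriorFactorPose2(x1, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise)\n",
"\n",
"# Factors are collected in a NonlinearFactorGraph, which the nonlinear optimizers consume.\n",
"graph = gtsam.NonlinearFactorGraph()\n",
"graph.add(prior)\n",
"print(graph)"
]
},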
{
"cell_type": "markdown",
"id": "94ffa16d",
"metadata": {},
"source": [
"## Mathematical Formulation\n",
"\n",
"The `NonlinearFactor` is generally represented by a function $f(x)$, where $x$ is a vector of variables. The error is given by:\n",
"$$\n",
"e(x) = f(x)- z\n",
"$$\n",
"where $z$ is the observed measurement. The optimization process aims to minimize the sum of squared errors:\n",
"$$\n",
"\\min_x \\sum_i \\| e_i(x) \\|^2 \n",
"$$\n",
"\n",
"Linearization involves approximating $f(x)$ around a point $x_0$:\n",
"$$\n",
"f(x) \\approx f(x_0) + A\\delta x\n",
"$$\n",
"where $A$ is the Jacobian matrix of $f$ at $x_0$, and $\\delta x \\doteq x - x_0$. This leads to a linearized error:\n",
"$$\n",
"e(x) \\approx (f(x_0) + A\\delta x) - z = A\\delta x - b\n",
"$$\n",
"where $b\\doteq z - f(x_0)$ is the prediction error."
]
},
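{
"cell_type": "markdown",
"id": "9c0d1e2f",
"metadata": {},
"source": [
"To make the error and its linearization concrete, the sketch below (a minimal example assuming the `gtsam` Python wrapper; the `BetweenFactorPose2`, keys, and numbers are arbitrary illustrative choices, and the exact set of wrapped methods can vary slightly between GTSAM versions) evaluates $e(x_0) = f(x_0) - z$ at a linearization point and then calls `linearize` to obtain the $A$ matrix and $b$ vector of the linearized factor."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3f4a5b6c",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import gtsam\n",
"\n",
"# Measurement z: the observed relative pose between x1 and x2.\n",
"z = gtsam.Pose2(2.0, 0.0, 0.0)\n",
"noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))\n",
"x1, x2 = gtsam.symbol('x', 1), gtsam.symbol('x', 2)\n",
"factor = gtsam.BetweenFactorPose2(x1, x2, z, noise)\n",
"\n",
"# Linearization point x0, chosen slightly inconsistent with z so the error is nonzero.\n",
"values = gtsam.Values()\n",
"values.insert(x1, gtsam.Pose2(0.0, 0.0, 0.0))\n",
"values.insert(x2, gtsam.Pose2(2.3, 0.1, 0.0))\n",
"\n",
"# unwhitenedError returns e(x0) = f(x0) - z; error returns 0.5 times the\n",
"# squared Mahalanobis norm of e(x0) under the noise model.\n",
"print('e(x0) =', factor.unwhitenedError(values))\n",
"print('0.5*||e||^2 =', factor.error(values))\n",
"\n",
"# linearize returns a GaussianFactor encoding A*dx - b around x0;\n",
"# jacobian() unpacks the (A, b) pair of the underlying JacobianFactor.\n",
"gaussian = factor.linearize(values)\n",
"A, b = gaussian.jacobian()\n",
"print('A =', A)\n",
"print('b =', b)"
]
},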
{
"cell_type": "markdown",
"id": "e3842ba3",
"metadata": {},
"source": [
"## Key Functionalities\n",
"\n",
"### Error Calculation\n",
"\n",
"- **evaluateError**: This method computes the error vector for the factor given a set of variable values. The error is typically the difference between the predicted measurement and the actual measurement. The function can also return the Jacobian matrices if needed, which are crucial for optimization algorithms like Gauss-Newton or Levenberg-Marquardt.\n",
"\n",
"### Jacobian and Hessian\n",
"\n",
"- **linearize**: This method linearizes the nonlinear factor around a linearization point. It returns a `GaussianFactor`, which is an approximation of the `NonlinearFactor` using a first-order Taylor expansion. This is a critical step in iterative optimization methods, where the problem is repeatedly linearized and solved.\n",
"\n",
"### Active Flag\n",
"\n",
"- **active**: This function checks whether the factor should be included in the optimization process. A factor might be inactive if it does not contribute to the error, which can occur in cases of conditional constraints or gating functions.\n",
"\n",
"### Dimensionality\n",
"\n",
"- **dim**: Returns the dimensionality of the factor, which corresponds to the size of the error vector. This is important for understanding the contribution of the factor to the overall optimization problem.\n",
"\n",
"### Key Management\n",
"\n",
"- **keys**: Provides access to the keys (or variable indices) involved in the factor. This is essential for understanding which variables the factor is connected to in the factor graph.\n",
"\n",
"## Usage Notes\n",
"\n",
"- The `NonlinearFactor` class is typically used in conjunction with a `NonlinearFactorGraph`, which is a collection of such factors.\n",
"- Users need to implement the `evaluateError` method in derived classes to define the specific measurement model.\n",
"- The class is designed to be flexible and extensible, allowing for custom factors to be created for specific applications."
]
},
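{
"cell_type": "markdown",
"id": "7d8e9f0a",
"metadata": {},
"source": [
"In the Python wrapper, the usual way to supply a new measurement model is `gtsam.CustomFactor`, whose error function plays the role of `evaluateError`. The sketch below (a hypothetical 1-D measurement model; the key, noise model, and numbers are illustrative, and method availability can vary slightly across GTSAM versions) builds such a factor and then queries `keys`, `dim`, `error`, and `linearize` on it."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b1c2d3e4",
"metadata": {},
"outputs": [],
"source": [
"from typing import List, Optional\n",
"\n",
"import numpy as np\n",
"import gtsam\n",
"\n",
"X1 = gtsam.symbol('x', 1)\n",
"\n",
"def error_func(this: gtsam.CustomFactor, v: gtsam.Values, H: Optional[List[np.ndarray]]) -> np.ndarray:\n",
"    # Hypothetical 1-D model: predict the x-coordinate of a Pose2 and compare it to z = 1.0.\n",
"    pose = v.atPose2(this.keys()[0])\n",
"    error = np.array([pose.x() - 1.0])\n",
"    if H is not None:\n",
"        # First-order Jacobian of the error w.r.t. the Pose2 tangent space (1x3).\n",
"        H[0] = np.array([[np.cos(pose.theta()), -np.sin(pose.theta()), 0.0]])\n",
"    return error\n",
"\n",
"noise = gtsam.noiseModel.Isotropic.Sigma(1, 0.5)\n",
"factor = gtsam.CustomFactor(noise, gtsam.KeyVector([X1]), error_func)\n",
"\n",
"values = gtsam.Values()\n",
"values.insert(X1, gtsam.Pose2(2.0, 0.0, 0.0))\n",
"\n",
"print('keys :', list(factor.keys()))   # variables this factor is connected to\n",
"print('dim  :', factor.dim())          # length of the error vector (1 here)\n",
"print('error:', factor.error(values))  # 0.5 * squared Mahalanobis norm of e\n",
"print(factor.linearize(values))        # GaussianFactor approximation at these values"
]
}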
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}