FeatureBase: covariance, information, square root information upper matrices
This MR proposes several changes in the FeatureBase class concerning the measurement noise (covariance, information, and upper square root information):
- API coherence: use `Information` throughout the class (before, there was a mix of `mixedInfo` and `Information`).
- A new getter, `getMeasurementInformation()`, which is not really a getter: it returns the squared `squareRootInformationUpper`.
- Check & ensure symmetry: a preliminary assert with numeric tolerance `Constants::EPS` guards against wrong user input. Afterwards, when setting the input matrix, `Eigen::selfAdjointView` is used, which takes only the upper triangle to build an exactly symmetric matrix.
- Avoid singular covariance: the previous implementation did not ensure positive definiteness (it was a single `if` condition regularizing by adding a small constant diagonal matrix). The new function `avoidSingularCovariance()` uses a `while` loop that regularizes the matrix by adding a constant diagonal matrix with an increasing value until the matrix is positive definite (see the sketch after this list).
- `computeSqrtUpper()` now computes the upper square root of its input: the previous implementation computed the upper square root of the *inverse* of the input; now the function does what its name says, so its input is the information matrix, `covariance.inverse()`.
- New implementation of `computeSqrtUpper()`: it now also works for (close to) positive semi-definite information matrices, for which `Eigen::LLT` produces surprisingly bad factorizations.
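A minimal sketch of how these pieces could fit together, assuming plain Eigen types and placeholder tolerances (`EPS`/`EPS_SMALL` stand in for `Constants::EPS`/`Constants::EPS_SMALL`; the loop condition, the starting diagonal value, and the free-function signatures are illustrative, not FeatureBase's actual code):

```cpp
#include <Eigen/Dense>
#include <cassert>

using MatrixXs = Eigen::MatrixXd;      // stand-in for wolf's Eigen::MatrixXs
constexpr double EPS       = 1e-8;     // stand-in for Constants::EPS (assumed value)
constexpr double EPS_SMALL = 1e-16;    // stand-in for Constants::EPS_SMALL (assumed value)

// Add a growing constant diagonal until the smallest eigenvalue is safely
// positive -- the 'while' that replaces the previous single 'if'.
void avoidSingularCovariance(MatrixXs& cov)
{
    double diag = 1e-10;  // starting value, grows each iteration
    while (Eigen::SelfAdjointEigenSolver<MatrixXs>(cov).eigenvalues().minCoeff() < EPS_SMALL)
    {
        cov  += diag * MatrixXs::Identity(cov.rows(), cov.cols());
        diag *= 10.0;
    }
}

// Upper square root R of the information matrix: R^T * R = info.
MatrixXs computeSqrtUpper(const MatrixXs& info)
{
    return Eigen::LLT<MatrixXs>(info).matrixU();
}

void setMeasurementCovariance(MatrixXs& measurement_covariance, const MatrixXs& _meas_cov)
{
    // Preliminary assert with numeric tolerance, to catch wrong user input.
    assert((_meas_cov - _meas_cov.transpose()).cwiseAbs().maxCoeff() < EPS
           && "Not symmetric measurement covariance");

    // Take only the upper triangle to build an exactly symmetric matrix.
    measurement_covariance = _meas_cov.selfAdjointView<Eigen::Upper>();

    avoidSingularCovariance(measurement_covariance);
}
```

The `computeSqrtUpper()` here is the plain LLT case; the fallback for (close to) semi-definite inputs mentioned in the last bullet is sketched after the WIP note below.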
Open issue: what does "positive definite" mean numerically?

- Which test do we perform to decide it: determinant or eigen decomposition?
- Which tolerances should we use? For information and covariance matrices the answers may differ.

@artivis proposed using the wolf.h functions `isSymmetric()`, `isPositiveSemiDefinite()` and `isCovariance()`. I will do that for sure; however, I would like to discuss the two questions above and modify their implementation if needed.
WIP NOTE: Regularization of non-positive-definite matrices is not working properly. Actually, the previous implementation was not working either: adding `assert(measurement_covariance_.determinant() > Constants::EPS_SMALL)` after regularizing the matrix, some tests did not pass. Maybe we should move from adding a diagonal matrix to altering the eigen decomposition.
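A possible shape for that eigen-decomposition alternative, as a sketch with an assumed eigenvalue floor: decompose, clamp the offending eigenvalues, and rebuild.

```cpp
#include <Eigen/Dense>

using MatrixXs = Eigen::MatrixXd;  // stand-in for wolf's Eigen::MatrixXs

// Clamp every eigenvalue below 'min_eig' up to 'min_eig' and rebuild.
// The result is symmetric positive definite by construction, unlike the
// add-a-diagonal loop, which may need several iterations to get there.
MatrixXs regularizeBySpectrum(const MatrixXs& M, double min_eig = 1e-10)
{
    Eigen::SelfAdjointEigenSolver<MatrixXs> eig(M);
    Eigen::VectorXd d = eig.eigenvalues().cwiseMax(min_eig);
    return eig.eigenvectors() * d.asDiagonal() * eig.eigenvectors().transpose();
}
```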
Activity
The relevant diff in `FeatureBase::setMeasurementCovariance()`:

```diff
 void FeatureBase::setMeasurementCovariance(const Eigen::MatrixXs & _meas_cov)
 {
-    if (_meas_cov.determinant() < Constants::EPS_SMALL)
-    {
-        Eigen::MatrixXs eps = Eigen::MatrixXs::Identity(_meas_cov.rows(), _meas_cov.cols()) * 1e-10;
-        measurement_covariance_ = _meas_cov + eps; // avoid singular covariance
-    }
-    else
-        measurement_covariance_ = _meas_cov;
+    // Check symmetry (soft)
+    assert((_meas_cov - _meas_cov.transpose()).cwiseAbs().maxCoeff() < Constants::EPS && "Not symmetric measurement covariance");
```

@joanvallve FYI, there are a couple of helper functions in wolf.h: `isSymmetric`, `isPositiveSemiDefinite` and `isCovariance`, together with a macro `WOLF_ASSERT_COVARIANCE_MATRIX`.
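As a rough sketch of what such helpers can look like (the real implementations live in wolf.h; the signatures and tolerances here are assumptions):

```cpp
#include <Eigen/Dense>

using MatrixXs = Eigen::MatrixXd;  // stand-in for wolf's Eigen::MatrixXs

bool isSymmetric(const MatrixXs& M, double eps = 1e-8)
{
    return (M - M.transpose()).cwiseAbs().maxCoeff() < eps;
}

bool isPositiveSemiDefinite(const MatrixXs& M, double eps = 0.0)
{
    // Eigenvalues of a self-adjoint matrix are real; minCoeff() is the smallest.
    return Eigen::SelfAdjointEigenSolver<MatrixXs>(M).eigenvalues().minCoeff() >= -eps;
}

bool isCovariance(const MatrixXs& M)
{
    return isSymmetric(M) && isPositiveSemiDefinite(M);
}
```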
Some more insights regarding your first question on positive definiteness: notice that the macro `WOLF_ASSERT_COVARIANCE_MATRIX` actually asserts both the determinant and the eigen decomposition. As both tests are enclosed in `assert`s, I would say the computational cost is not a problem, since it disappears when compiled in release.

About the null-space strategy: you are considering that it has dimension 1, right? I think it is equivalent to altering the zero eigenvalues (and what about the negative ones?), and the `kernel()` function is also a member of Eigen's linear algebra decomposition classes, so it performs the decomposition as well. I don't see the benefit.

@joanvallve, the macro tests the determinant first, as I assumed it was less computationally expensive. Indeed, it then double-checks: symmetry first, then positive definiteness...
In case there are different tolerances depending on whether we check information or covariance, I'd simply have two different functions/macros, explicitly named, to do so.
@artivis I also think the determinant is less computationally expensive. But if the test fails, you didn't need efficiency, and if it doesn't fail, you'll check the eigen decomposition anyway... right?
Joan Vallvé, you are surely right about the kernel and decomposition things.

Jeremie, computing the determinant is going to call a decomposition internally, so there is really no gain here.

I think we should do something to avoid ill-conditioned situations, but it must be done in only one place, clearly identifiable. Now, if you follow the code pipeline, there are two equivalent `if()`s testing for the same thing at two different places, and with two different epsilon values! They also correct by adding a diagonal constant, with two different constant values!

Finally, I tend to vote for removing the correction and having only an assert. It should be the responsibility of all processors to provide proper data to the solver. However, we tried this, and when errors occur it is not easy to boil down the problem. So some time ago we re-introduced this correction. I insist, however: the correction should be in only ONE place --> in FeatureBase, for sure, but also in one single function, which should probably be called only once.
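The cost argument in code form, as a sketch (the function name and tolerances are illustrative, not wolf's actual macro): a single self-adjoint eigen decomposition subsumes the determinant test, since the determinant is the product of the eigenvalues, and both `assert`s disappear entirely in a release (NDEBUG) build.

```cpp
#include <Eigen/Dense>
#include <cassert>

// One decomposition serves all tests: symmetry is checked directly, and the
// smallest eigenvalue subsumes the determinant test (det = product of
// eigenvalues), so a separate determinant pre-test saves nothing.
inline void assertCovarianceOnce(const Eigen::MatrixXd& cov)
{
    assert((cov - cov.transpose()).cwiseAbs().maxCoeff() < 1e-8  // assumed tolerance
           && "covariance is not symmetric");
    assert(Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd>(cov).eigenvalues().minCoeff() > 0.0
           && "covariance is not positive definite");
}
```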
I agree with @artivis that `feature_base` should only have asserts and should not modify the user input (apart from numerical issues). However, when I didn't do that, some tests simply didn't pass, and an MR should pass all the tests...
The last pushed commit enforces a symmetric positive definite user input using two different assertions defined in wolf.h:

- `WOLF_ASSERT_COVARIANCE_MATRIX` (slightly modified): uses `Constants::EPS_SMALL` as the minimum eigenvalue allowed.
- `WOLF_ASSERT_INFORMATION_MATRIX`: uses 0 as the minimum eigenvalue allowed.

For sure, these values are under discussion; a hypothetical sketch of both macros follows.
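What the two assertions could look like, with `Eigen::MatrixXd` standing in for wolf's `MatrixXs` and an assumed value for `Constants::EPS_SMALL`; the real macros in wolf.h may well differ (in practice they would also assert symmetry; only the eigenvalue floors are taken from the text above):

```cpp
#include <Eigen/Dense>
#include <cassert>

namespace Constants { constexpr double EPS_SMALL = 1e-16; }  // assumed value

// Covariance: smallest eigenvalue must be at least EPS_SMALL (strictly PD).
#define WOLF_ASSERT_COVARIANCE_MATRIX(x)                                   \
    assert(Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd>(x).eigenvalues() \
                   .minCoeff() >= Constants::EPS_SMALL                     \
           && "Not a covariance: eigenvalue below EPS_SMALL")

// Information: smallest eigenvalue must be non-negative (PSD allowed).
#define WOLF_ASSERT_INFORMATION_MATRIX(x)                                  \
    assert(Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd>(x).eigenvalues() \
                   .minCoeff() >= 0.0                                      \
           && "Not an information matrix: negative eigenvalue")
```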
Running the tests with these assertions in place gives this result:
```
The following tests FAILED:
	 16 - gtest_track_matrix (OTHER_FAULT)
	 23 - gtest_feature_IMU (OTHER_FAULT)
	 26 - gtest_processor_frame_nearest_neighbor_filter_2D (Failed)
```
I've fixed the `gtest_track_matrix` test. I think `gtest_processor_frame_nearest_neighbor_filter_2D` is also fixable, because I see a covariance fixed to 0, but I don't understand what the test is doing... (who is tessajohanna?)

But about the IMU... we should change the processor, right?
@joanvallve Tessa Johanna was an intern at PAL. She wrote the loop-closure-related processor, including the one you are mentioning, together with the tests.