
FeatureBase: covariance, information, square root information upper matrices

Merged Joan Vallvé Navarro requested to merge compute_upper_sqrt into master

This MR proposes several changes to the FeatureBase class concerning the measurement noise (covariance, information, and upper square root information):

  1. API coherence: use Information consistently throughout the class (previously, Info and Information were mixed).
  2. A new getter: getMeasurementInformation(), which is not really a getter: it returns the square of the stored upper square root information R, i.e. R^T·R.
  3. Check & ensure symmetry: a preliminary assert with numeric tolerance Constants::EPS guards against wrong user input. Afterwards, when storing the input matrix, Eigen's selfadjointView() is used, which takes only the upper triangle to build an exactly symmetric matrix.
  4. Avoid singular covariance: the previous implementation didn't ensure an SDP result (a single if condition regularized the matrix by adding a small constant diagonal matrix). The new function avoidSingularCovariance() uses a while loop that regularizes the matrix by adding a constant diagonal matrix of increasing value until the matrix is SDP.
  5. computeSqrtUpper() computes the upper square root: the previous implementation computed the upper square root of the inverse of its input; now it does what its name says, so its input is the information matrix, covariance.inverse().
  6. New implementation of computeSqrtUpper(): it now also works for (close to) SDP information matrices, for which Eigen::LLT produces surprisingly bad factorizations. (See the sketch after this list.)
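
For illustration, a minimal sketch of points 2-6 (the member name measurement_sqrt_information_upper_, the starting regularization value, and the determinant-based SDP test are assumptions of this sketch, not necessarily the merged code):

    // Point 2 (sketch): not really a getter, it squares the stored upper square root R,
    // since information = R^T * R. The member name is illustrative.
    Eigen::MatrixXs FeatureBase::getMeasurementInformation() const
    {
        return measurement_sqrt_information_upper_.transpose() * measurement_sqrt_information_upper_;
    }

    // Point 3 (sketch): assert symmetry softly, store an exactly symmetric matrix built
    // from the upper triangle only, then apply point 4's regularization.
    void FeatureBase::setMeasurementCovariance(const Eigen::MatrixXs& _meas_cov)
    {
        assert((_meas_cov - _meas_cov.transpose()).cwiseAbs().maxCoeff() < Constants::EPS
               && "Not symmetric measurement covariance");
        measurement_covariance_ = _meas_cov.selfadjointView<Eigen::Upper>();
        avoidSingularCovariance(measurement_covariance_);
    }

    // Point 4 (sketch): add a diagonal term of increasing value until the matrix is SDP.
    void avoidSingularCovariance(Eigen::MatrixXs& cov)
    {
        Eigen::MatrixXs::Scalar eps = 1e-10;
        while (cov.determinant() < Constants::EPS_SMALL) // SDP test under discussion below
        {
            cov += eps * Eigen::MatrixXs::Identity(cov.rows(), cov.cols());
            eps *= 10;
        }
    }

    // Points 5-6 (sketch): upper square root R of an information matrix Q, i.e. Q = R^T * R,
    // via eigendecomposition + QR rather than a plain Eigen::LLT, which can produce bad
    // factorizations on (close to) singular input.
    Eigen::MatrixXs computeSqrtUpper(const Eigen::MatrixXs& Q)
    {
        Eigen::SelfAdjointEigenSolver<Eigen::MatrixXs> es(Q);
        Eigen::MatrixXs S = es.eigenvalues().cwiseMax(0.0).cwiseSqrt().asDiagonal()
                          * es.eigenvectors().transpose();                      // S^T * S == Q
        return Eigen::MatrixXs(Eigen::HouseholderQR<Eigen::MatrixXs>(S)
                                   .matrixQR().triangularView<Eigen::Upper>()); // R^T * R == Q
    }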

Open issue: what does SDP mean?

  • Which test do we perform to decide: the determinant or the eigendecomposition? (See the snippet below.)
  • Which tolerances should we use? For information and for covariance matrices the answers may differ.
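
For concreteness, the two candidate tests would look roughly like this (tol is a placeholder; note that the determinant alone cannot detect an even number of negative eigenvalues):

    // Candidate 1: determinant test. Cheap to write, but a matrix with e.g. two
    // negative eigenvalues still has a positive determinant.
    bool sdp_det = M.determinant() > tol;

    // Candidate 2: eigendecomposition test. Checks every eigenvalue explicitly.
    bool sdp_eig = Eigen::SelfAdjointEigenSolver<Eigen::MatrixXs>(M).eigenvalues().minCoeff() > tol;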

@artivis proposed using the wolf.h functions isSymmetric(), isPositiveSemiDefinite(), and isCovariance(). I will do so for sure; however, I would like to discuss the two questions above and modify their implementation if needed.

WIP NOTE: Regularization of non-SDP matrices is not working properly. Actually, the previous implementation was not working either: after adding assert(measurement_covariance_.determinant() > Constants::EPS_SMALL) right after regularizing the matrix, some tests didn't pass. Maybe we should move from adding a diagonal matrix to altering the eigendecomposition (sketched below).
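
A sketch of that eigendecomposition-based alternative, assuming we clamp the eigenvalues to a floor (using Constants::EPS_SMALL as the floor is an assumption):

    // Clamp the eigenvalues of 'cov' to a minimum value and rebuild the matrix.
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXs> es(cov);
    cov = es.eigenvectors()
          * es.eigenvalues().cwiseMax(Constants::EPS_SMALL).asDiagonal()
          * es.eigenvectors().transpose();

Unlike the diagonal-constant correction, this only alters the offending eigenvalues and leaves well-conditioned directions untouched.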

Edited by Joan Solà Ortega


Activity

void FeatureBase::setMeasurementCovariance(const Eigen::MatrixXs & _meas_cov)
{
-    if (_meas_cov.determinant() < Constants::EPS_SMALL)
-    {
-        Eigen::MatrixXs eps = Eigen::MatrixXs::Identity(_meas_cov.rows(), _meas_cov.cols()) * 1e-10;
-        measurement_covariance_ = _meas_cov + eps; // avoid singular covariance
-    }
-    else
-        measurement_covariance_ = _meas_cov;
+    // Check symmetry (soft)
+    assert((_meas_cov - _meas_cov.transpose()).cwiseAbs().maxCoeff() < Constants::EPS && "Not symmetric measurement covariance");
  • added 1 commit


  • "Maybe we should move from adding a diagonal matrix to altering the Eigen decomposition."

    +1

    We can do so. Alternatively, we can add some small term along the null space of the singular matrix:

    Eigen::MatrixXs N = Eigen::FullPivLU<Eigen::MatrixXs>(Q).kernel(); // null-space basis of Q
    Q = Q + N * eps * N.transpose();
  • Some more insights regarding your first SDP-related question: notice that the macro WOLF_ASSERT_COVARIANCE_MATRIX actually asserts both the determinant and the eigendecomposition.
    As both tests are enclosed in asserts, I would say the computational cost is not a problem, since it disappears when compiled in release.

  • If we check the eigenvalues, I don't see why we should also check the determinant..

  • About the null-space strategy: you are assuming it has dimension 1, right? I think it is equivalent to altering the zero eigenvalues (and what about the negative ones?), and the kernel() function is also a member of Eigen's linear-algebra decomposition classes, so it also performs the decomposition. I don't see the benefit..

    Edited by Joan Vallvé Navarro
  • @joanvallve, the macro tests the determinant first, as I assumed it was less computationally expensive. It then double-checks, first for symmetry and then for SDP...

    In case information and covariance require different tolerances, I'd simply have two different, explicitly named functions/macros.

  • added 1 commit

    • 1a95699e - Changed avoidSingularCovariance


  • @artivis I also think the determinant is less computationally expensive. But if it fails, you didn't need efficiency, and if it doesn't fail, you'll check the eigendecomposition anyway.. right?

  • @joanvallve you are surely right about the kernel and decomposition things.

    Jeremie, calling the determinant is internally going to call a decomposition, so there is no real gain there.

    I think we should do something to avoid ill situations, but it must be done in only one place, clearly identifiable. Right now, if you follow the code pipeline, there are two equivalent if() checks testing for the same thing in two different places, but with two different epsilon values! They also correct by adding a diagonal constant, with two different constant values!

    Finally, I tend to vote for removing the correction and having only an assert. It should be the responsibility of all Processors to provide proper data to the solver. However, we tried this before, and when errors occur it is not easy to track down the problem. So some time ago we re-introduced this correction. I insist, however, that the correction should live in only ONE place --> in FeatureBase, for sure, but also in one single function, probably called only once.

    Edited by Joan Solà Ortega
  • added 1 commit

    • 9f45eb28 - Imposing SDP+sym input instead of fixing it


  • Sorry for having brought up 'efficiency'. Again, the macro uses a plain assert(), so once compiled in release it basically disappears.

  • added 1 commit

    • f43f9f11 - not using avoidSingularCovariance


  • I agree with @artivis that feature_base should only have asserts and should not modify the user input (apart from numerical issues).

    However, when I didn't do that, some tests simply didn't pass, and an MR should pass all the tests..

    The last pushed commit imposes SDP+symmetric user input using two different assertions defined in wolf.h:

    • WOLF_ASSERT_COVARIANCE_MATRIX (slightly modified): uses Constants::EPS_SMALL as the minimum eigenvalue allowed.
    • WOLF_ASSERT_INFORMATION_MATRIX: uses 0 as the minimum eigenvalue allowed.

    These values are, of course, open to discussion.
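
    For reference, these assertions could have roughly this shape (a sketch using the eigenvalue test; the actual wolf.h macros may differ):

        // Sketch only: hypothetical shape of the two wolf.h assertions discussed above.
        #define WOLF_ASSERT_COVARIANCE_MATRIX(x) \
            assert(Eigen::SelfAdjointEigenSolver<Eigen::MatrixXs>(x).eigenvalues().minCoeff() \
                       > Constants::EPS_SMALL && "Not a covariance matrix")

        #define WOLF_ASSERT_INFORMATION_MATRIX(x) \
            assert(Eigen::SelfAdjointEigenSolver<Eigen::MatrixXs>(x).eigenvalues().minCoeff() \
                       >= 0 && "Not an information matrix")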

    Running the tests with these assertions gives:

    The following tests FAILED:
      16 - gtest_track_matrix (OTHER_FAULT)
      23 - gtest_feature_IMU (OTHER_FAULT)
      26 - gtest_processor_frame_nearest_neighbor_filter_2D (Failed)

  • I've fixed the gtest_track_matrix test. I think gtest_processor_frame_nearest_neighbor_filter_2D is also fixable, because I see a covariance fixed to 0, but I don't understand what the test is doing.. (who is tessajohanna?)

    But about the IMU... we should change the processor, right?

    Edited by Joan Vallvé Navarro
  • added 2 commits


  • @joanvallve Tessa Johanna was an intern at PAL. She wrote the loop-closure-related processor, including the one you are mentioning, together with the tests.
