At several points in the codebase, the complete state of a frame (or landmark, or capture: any object inheriting from HasStateBlocks) needs to be passed around. Examples include Problem::getState(Timestamp ts), where it represents the state estimate at a given timestamp, and the FrameBase::FrameBase(...) constructor, where it represents an initial value. This state is passed as an Eigen::VectorXd, assuming that the receiver "knows" which structure it refers to (PO, POV, POVCDL, and not PVO, for instance). This is sufficient for cases where the structure is obvious, and we currently enforce it to be either PO or POV at problem creation. However, I have struggled to adapt this model to the more complex case of multiple ProcessorMotions acting on different states, because in this case:
each ProcessorMotion returns only a part of the estimated state
their inner propagated frame is created by initializing only the corresponding state blocks
I would argue that using a data structure that describes by itself the structure of the state being passed, whenever something other than a single state block value is passed, would make our life easier. Building on the ideas of HasStateBlocks, I think that a std::unordered_map<std::string, Eigen::VectorXd> representing a dictionary of {state_block_name, sb_value} pairs would be a good candidate.
For instance, Eigen::VectorXd Problem::getCurrentState() would become const std::unordered_map<std::string, Eigen::VectorXd>& Problem::getCurrentState()
Even in the usual setting, we never (to my knowledge) do computations on this full state vector; rather, we retrieve segments of it for calculations. It would also make things easier to handle for end users. A thin wrapper class around the map could look like:
```cpp
class State
{
  private:
    std::unordered_map<std::string, Eigen::VectorXd> state_structure_; // container

  public:
    State(StateBlockMap _map);                          // constructor from the map in HasStateBlocks
    VectorXd block(const std::string& _key) const;      // get vector of state block '_key'
    VectorXd vector(const std::string& _struct) const;  // get a full state vector
                                                        // according to structure '_struct'
};
```
This Issue of changing the states from Eigen::Vector/Matrix to wolf::Vector/Block/MatrixComposite is getting huge:
Created the following types
VectorComposite
MatrixComposite
StateBlockComposite
Used in
HasStateBlocks
Adapting processor motion things:
IsMotion
ProcessorMotion
ProcessorMotion_Derived
This means:
Changing Motion, MotionBuffer
Changing the definition of the Calibration vector. This affects:
CaptureBase
CaptureMotion
Also, since we are operating algebraically on these vecs and mats, I overloaded a few operators:
operator +, -, *
operator <<
However, more operators might be needed. To be fair, we should overload nearly all operators...
operator +=, *=, -=
This soon becomes ridiculous if we want to be operative.
In addition, at the time of making a KF, features need to be created. Since features hold Eigen::Vec/Mat data, we have to convert wolf::Vec/MatComposite to Eigen::Vec/Mat. To create full vectors and matrices, we again need to consider the order of the blocks, so we need
```cpp
MatrixComposite Q;                  // this is a block-symmetric matrix
MatrixComposite J;                  // this is a Jacobian matrix compatible with the structure of Q
MatrixComposite S = Q.propagate(J); // this is S = J * Q * J.transpose() !!!
```
Very cool! So MatrixComposite::propagate makes the assumption that the corresponding object is symmetric positive, right? Or are all MatrixComposite supposed to be symmetric positive?
Ok, I just thought that MatrixComposite::propagate was taking advantage of the fact that the matrix it is called upon is symmetric, but you seem to be telling me it's not?
What about "S[j][i] = S_ij^T (9) // write symmetrical block in output composite"?
You just multiply matrices. If they don't fit, it's an error. So you have to provide proper matrices, otherwise it does not work. But the algorithm does not assume anything; it just computes the product.
It's like Eigen's A*B. If A.cols() != B.rows(), it fails! But the operator '*' does not assume anything.
Yes, I was only talking about propagate; I just found it a bit strange to have a method that doesn't apply to all instances of its class. In the first snippet you posted, the 'this' was a covariance matrix and the argument was a Jacobian. Now you have changed it to the opposite, which is better, I think.
And semantically Qpropa = Jac.propagate(Q) makes sense.
```cpp
/** \brief Propagate a given covariance matrix, with 'this' being a Jacobian matrix
 *
 * This function computes the product J * Q * J.transpose().
 *
 * It considers that 'this' is a Jacobian matrix, and that the provided matrix
 * is a proper covariance (e.g. symmetric and semi-positive).
 * It also considers that the Jacobian blocks are compatible with the blocks
 * of the provided covariance, both in their structure and their individual sizes.
 *
 * If these conditions are not satisfied, the product will fail and throw,
 * or give aberrant results at best.
 *
 * For example: let us call 'this' a Jacobian matrix with the name 'J'.
 *
 * This Jacobian 'J' maps the "PO" blocks into "VW":
 *
 *   J["V"]["P"]  J["V"]["O"]
 *   J["W"]["P"]  J["W"]["O"]
 *
 * The provided matrix is a covariance matrix Q.
 * Q has blocks "P", "O" in both vertical and horizontal arrangements:
 *
 *   Q["P"]["P"]  Q["P"]["O"]
 *   Q["O"]["P"]  Q["O"]["O"]
 *
 * Also, assume that the sizes of the blocks "P", "O" are the same in Q and J.
 *
 * Now, upon a call to
 *
 *   MatrixComposite S = J.propagate(Q);
 *
 * the result is a covariance matrix S with blocks "V" and "W":
 *
 *   S["V"]["V"]  S["V"]["W"]
 *   S["W"]["V"]  S["W"]["W"]
 */
MatrixComposite MatrixComposite::propagate(const MatrixComposite& _Cov)
{
    // simplify names of operands
    const auto& J = *this;
    const auto& Q = _Cov;

    MatrixComposite S; // S = J * Q * J.tr

    /* Covariance propagation
     *
     * 1. General formula for the output block S(i,j)
     *
     *      S_ij = sum_n (J_in * sum_k (Q_nk * J_jk^T))     (A)
     *
     *    which develops as
     *
     *      S_ij = sum_n (J_in * QJt_nj)
     *
     *    with:
     *
     *      QJt_nj = sum_k (Q_nk * J_jk^T)                  (B)
     *
     * 2. Algorithm to accomplish the above:
     *
     *    for i = 1 : S.rows = J.rows                       (1) // iterate for i and j
     *    {
     *      J_i = J[i]
     *      for j = i : S.cols = J.rows                     (2) // j starts at i: do not loop over the symmetrical blocks
     *      {
     *        S_ij = 0                                      (3) // start formula (A) for S_ij
     *        for n = 1 : Q.rows                            (4)
     *        {
     *          J_in = J[i][n]
     *          QJt_nj = 0                                  (5) // start formula (B) for QJt_nj
     *          for k = 1 : Q.cols                          (6)
     *          {
     *            J_jk = J[j][k]
     *            QJt_nj += Q_nk * J_jk^T                   (7) // sum on QJt_nj
     *          }
     *          S_ij += J_in * QJt_nj                       (8) // sum on S_ij
     *        }
     *        S[i][j] = S_ij                                    // write block in output composite
     *        if (i != j)
     *          S[j][i] = S_ij^T                            (9) // write symmetrical block in output composite
     *      }
     *    }
     */

    // Iterate on the output matrix S first, row i, col j.
    for (const auto& pair_i_Si : J)                         // (1)
    {
        const auto& i   = pair_i_Si.first;
        const auto& J_i = pair_i_Si.second;
        double i_size   = J_i.begin()->second.rows();

        for (const auto& pair_j_Sj : J)                     // (2)
        {
            const auto& j   = pair_j_Sj.first;
            const auto& J_j = pair_j_Sj.second;
            double j_size   = J_j.begin()->second.rows();

            MatrixXd S_ij(MatrixXd::Zero(i_size, j_size));  // (3)

            for (const auto& pair_n_Qn : Q)                 // (4)
            {
                const auto& n   = pair_n_Qn.first;
                const auto& Q_n = pair_n_Qn.second;
                double n_size   = Q_n.begin()->second.rows();

                const auto& J_in = J_i.at(n);

                MatrixXd QJt_nj(MatrixXd::Zero(n_size, j_size)); // (5)

                for (const auto& pair_k_Qnk : Q_n)          // (6)
                {
                    const auto& k    = pair_k_Qnk.first;
                    const auto& Q_nk = pair_k_Qnk.second;
                    const auto& J_jk = J_j.at(k);

                    QJt_nj += Q_nk * J_jk.transpose();      // (7)
                }
                S_ij += J_in * QJt_nj;                      // (8)
            }
            S.emplace(i, j, S_ij);

            if (j == i)
                break;                                      // avoid computing the symmetrical block!
            else
                S.emplace(j, i, S.at(i, j).transpose());    // (9)
        }
    }
    return S;
}
```