Observability, Identifiability and Sensitivity of Vision-Aided Inertial Navigation
Joshua Hernandez, Konstantine Tsotsos, Stefano Soatto
We analyze the observability of 3-D position and orientation from the fusion of visual and inertial sensors. The model contains unknown parameters, such as sensor biases, so the problem is usually cast as a mixed filtering/identification problem, with the resulting observability analysis providing necessary conditions for convergence to a unique point estimate. Most models treat sensor bias rates as noise, independent of other states, including the biases themselves, an assumption that is violated in practice. We show that, when this assumption is lifted, the resulting model is not observable, and therefore existing analyses cannot be used to conclude that the set of states indistinguishable from the measurements is a singleton. We recast the analysis as one of sensitivity: rather than attempting to prove that the set of indistinguishable trajectories is a singleton, we derive bounds on its volume as a function of sensor characteristics and sufficient-excitation conditions. This provides an explicit characterization of the indistinguishable set that can be used for analysis and validation purposes.
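The observability question at the heart of the abstract can be illustrated on a toy linear system (this is a hedged sketch, not the paper's vision-aided inertial model): a standard Kalman observability rank test shows how a single bias driving the measured dynamics is recoverable, while two biases entering the measured channel identically are not, so the indistinguishable set is not a singleton.

```python
import numpy as np

def obs_rank(A, C):
    """Rank of the Kalman observability matrix [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    blocks = [C @ np.linalg.matrix_power(A, k) for k in range(n)]
    return np.linalg.matrix_rank(np.vstack(blocks))

# Observable toy: state (x, v, b) with x' = v, v' = b
# (a constant bias drives the dynamics), measuring x only.
A1 = np.array([[0., 1., 0.],
               [0., 0., 1.],
               [0., 0., 0.]])
C1 = np.array([[1., 0., 0.]])
print(obs_rank(A1, C1))  # 3: full rank, the single bias is observable

# Unobservable toy: state (x, b1, b2) with x' = b1 + b2;
# the two biases enter the measured channel identically,
# so only their sum can be recovered.
A2 = np.array([[0., 1., 1.],
               [0., 0., 0.],
               [0., 0., 0.]])
C2 = np.array([[1., 0., 0.]])
print(obs_rank(A2, C2))  # 2: rank-deficient, b1 - b2 is indistinguishable
```

In the second system, every trajectory with the same b1 + b2 produces identical measurements, which is the flavor of indistinguishable set whose volume the paper proposes to bound rather than assume away.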