Simulation of deep-space autonomous Line-of-Sight navigation using synthetic images in the loop
Abstract
Autonomous line-of-sight navigation is an appealing technique for deep-space spacecraft, particularly miniaturized ones, to estimate their state during cruise. It is based on observing the directions of visible bodies, which are processed onboard to estimate the spacecraft's 6D heliocentric state. Its applicability has previously been investigated by feeding the navigation filter with simplified measurements, simulated by adding noise to the true direction according to star-tracker characteristics. While this approach is convenient and appropriate for a preliminary study, it is not sufficient to characterize the method's performance in depth, nor, ultimately, to prove its applicability to real missions. The reason is that the measurement error cannot be attributed exclusively to the star tracker's characteristics: the observation scenario (e.g., the planet's apparent size, illumination conditions, and star background) also plays an important role. For this reason, in this work we include image processing in the simulation loop. First, we define how to simulate realistic and reliable synthetic space images as a function of the hardware characteristics and the observation scenario; then we use the generated images within the simulation to compute the measurements. This approach also makes it possible to further improve the navigation filter design: we developed an Adaptive Extended Kalman Filter (AEKF) to cope with variable measurement errors and dynamics conditions. The filter automatically tunes both the process noise covariance and the measurement noise covariance matrices as a function of the scenario. With this work, we add two important pieces to the road map toward fully autonomous deep-space spacecraft: refined performance evaluation including image processing, and the design of an AEKF for the technique.
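The simplified measurement model criticized above can be made concrete with a minimal sketch: a true line-of-sight unit vector is perturbed by small-angle Gaussian noise in the plane orthogonal to it, with the standard deviation taken from the star tracker's angular accuracy. All names and the noise model here are illustrative, not the paper's exact formulation.

```python
import numpy as np

def simulate_los_measurement(true_dir, sigma_rad, rng=None):
    """Perturb a true unit line-of-sight direction with small-angle noise.

    Illustrative sketch: sigma_rad stands in for the star tracker's
    angular accuracy. The paper argues this is insufficient, since the
    real error also depends on the observation scenario (apparent size,
    illumination, star background), which requires images in the loop.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = np.asarray(true_dir, float)
    d = d / np.linalg.norm(d)
    # Build two unit axes spanning the plane orthogonal to the direction
    helper = np.array([1.0, 0.0, 0.0])
    if abs(d @ helper) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    e1 = np.cross(d, helper)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(d, e1)
    # Small-angle perturbation (radians) along the two orthogonal axes
    a, b = rng.normal(0.0, sigma_rad, size=2)
    noisy = d + a * e1 + b * e2
    return noisy / np.linalg.norm(noisy)
```

For a typical star-tracker-class accuracy of a few arcseconds, `sigma_rad` would be on the order of 1e-5 rad; the angular error of the returned vector is approximately `sqrt(a**2 + b**2)`.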
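To illustrate what adaptive tuning of the measurement noise can look like, the scalar sketch below uses a generic innovation-based scheme: the measurement noise variance R is re-estimated from a sliding window of filter innovations. This is a standard adaptation approach shown only for intuition; it is not claimed to be the AEKF design developed in the paper, and all parameter names are hypothetical.

```python
import numpy as np

def adaptive_kf_1d(zs, q=1e-4, r0=1.0, window=10):
    """Scalar random-walk Kalman filter with innovation-based tuning of R.

    Generic illustration (not the paper's AEKF): R is re-estimated from a
    sliding window of innovations as R_hat = mean(nu^2) - P_prior, floored
    at a small positive value so the gain stays well defined.
    """
    zs = np.asarray(zs, float)
    x, p, r = zs[0], r0, r0         # initialize state with first measurement
    innovations = []
    estimates = [x]
    for z in zs[1:]:
        p += q                       # predict: random-walk process noise
        nu = z - x                   # innovation (measurement residual)
        innovations.append(nu)
        if len(innovations) >= window:
            recent = innovations[-window:]
            r = max(np.mean(np.square(recent)) - p, 1e-8)  # adapt R
        k = p / (p + r)              # Kalman gain
        x += k * nu                  # state update
        p *= 1.0 - k                 # covariance update
        estimates.append(x)
    return np.array(estimates), r
```

In the paper's setting, the analogous adaptation would be driven by the observation scenario (via the processed images) rather than by innovations alone, and would cover the process noise covariance as well.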