This thesis is inspired by Active Inference and aims to contribute to its improvement in the field of robotics. However, the results and applications of this thesis are useful from a broader perspective, namely in any field that makes use of derivatives and the forecasting of a time-series signal. The goal of this study is to develop a new approach that combines the finite-difference technique with forecasting to calculate derivatives. The finite-difference technique is based on the Taylor expansion: whereas the Taylor expansion uses derivatives to calculate a future signal sample, the finite-difference technique uses signal samples to estimate derivatives. The accuracy of the estimate can, in theory, be increased by using more signal samples and by choosing signal samples that lie closer to the sample of interest. For these reasons, the central method is attractive, as it maximizes both properties. However, the central method requires signal samples from the future relative to the sample of interest, and these are not available in an online setting. Future values can be forecast with, for example, statistical models or predictive coding. Statistical models such as autoregression are the gold standard but cannot learn in an online fashion. Predictive coding can learn online, and research has identified a promising branch that utilizes neural-network frameworks. Among these, PredNet stands out due to its state-of-the-art results and flexibility. The research question of this thesis is therefore: can predicting future data points with PredNet improve the accuracy of the generalized coordinates, i.e., the signal and its temporal derivatives?
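The trade-off described above can be illustrated with a minimal sketch (not the thesis code; the signal, step size, and function names are illustrative): a backward stencil uses only past samples and is first-order accurate, while a central stencil needs one future sample but is second-order accurate.

```python
import numpy as np

# Illustrative test signal: f(t) = sin(2*pi*t), sampled at step dt.
dt = 0.01
t = np.arange(0.0, 1.0, dt)
f = np.sin(2 * np.pi * t)
true_deriv = 2 * np.pi * np.cos(2 * np.pi * t)

def backward_diff(f, i, dt):
    # First-order accurate: uses only past samples, so it works online.
    return (f[i] - f[i - 1]) / dt

def central_diff(f, i, dt):
    # Second-order accurate: needs the future sample f[i+1].
    return (f[i + 1] - f[i - 1]) / (2 * dt)

i = len(t) // 4  # interior sample of interest
err_b = abs(backward_diff(f, i, dt) - true_deriv[i])
err_c = abs(central_diff(f, i, dt) - true_deriv[i])
# The central estimate is more accurate, but obtaining f[i+1] online
# is exactly the gap that forecasting is meant to fill.
```

This is why the thesis pairs the central method with a forecaster: the future sample the stencil needs is replaced by a predicted one.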
Two PredNet models are trained on data obtained from random commands issued to a simulated Jackal robot: one in an online fashion and one in an offline fashion. Their predictions are compared against two other forecasting methods: copying the input and autoregression. The predictions are then used with the finite-difference method to estimate derivatives. Four configurations are used: one using as few past samples as possible, one using as many past samples as possible, one using as few centralized samples as possible, and one using as many centralized samples as possible. Derivatives are estimated on the same simulated Jackal data and compared against a reference derivative obtained from a central method that uses as many samples as possible, all taken from the true signal.
Results show no difference in forecasting accuracy between the two PredNet models, but there are differences in training speed: training in an online fashion is not fast enough to keep up with the data stream. This suggests that the quality of neural networks is sufficient for online applications, but that more work is needed on optimizing neural networks for online learning before it becomes a practical solution. Moreover, the PredNet predictions are less accurate than those of autoregression. This means that neural networks used for time-series forecasting can still learn from the autoregression method, for example by building hybrid models.
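The autoregression baseline can be sketched as a least-squares fit of the signal on its own lagged values (a minimal one-step forecaster; the order `p`, helper names, and test signal are illustrative assumptions, not the thesis configuration):

```python
import numpy as np

def fit_ar(x, p):
    # Build the lagged design matrix: column k holds the samples at lag k+1,
    # then solve X w = y in the least-squares sense.
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def forecast_ar(x, w):
    # One-step-ahead forecast: weighted sum of the p most recent samples,
    # ordered most recent first to match the fitted coefficients.
    p = len(w)
    return np.dot(w, x[-p:][::-1])

# Illustrative usage on a sinusoid (which an AR model can fit exactly).
x = np.sin(np.arange(200) * 0.05)
w = fit_ar(x, p=4)
x_next = forecast_ar(x, w)  # forecast of the sample at t = 200 * 0.05
```

Unlike the PredNet models, refitting such a model on every new window is cheap, which is one reason it remains the gold-standard baseline here.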
Using the PredNet forecasts for estimating derivatives did not increase the accuracy of the derivatives, meaning the predictions are not accurate enough. Furthermore, using more samples, whether from the past or the future, did not improve the derivative estimates either. Most of this outcome can be attributed to the choice of data set, which contained many transitions between commands that could be neither predicted by the PredNet models nor captured by the finite-difference method.
Suggested improvements are to focus the training of a model on either the transition periods or the periods in between, to use more modern models, and/or to use different methods for estimating derivatives.
This work concludes that neural networks show promise for online applications, that training a model requires either a specific data set or a model complex enough to exploit a more general data set, and that using more samples with the finite-difference method does not guarantee more accurate derivative estimates.