Option Pricing Techniques Using Neural Networks


Abstract

With the emergence of more complex option pricing models, the demand for fast and accurate numerical pricing techniques is increasing. Due to the growing amount of accessible computational power, neural networks have become a feasible numerical method for approximating solutions to these pricing models. This work concentrates on analysing various neural network architectures on option pricing optimisation problems in a supervised and a semi-supervised learning setting. We compare the mean-squared error (MSE) and training time of a multilayer perceptron (MLP), a highway network and the recently developed DGM network (Sirignano et al., 2018), along with slight variations of each, on the Black-Scholes and Heston European call option pricing problems as well as the implied volatility problem. We find that on nearly all the supervised learning problems, the generalised highway architecture outperforms its counterparts in terms of MSE relative to computation time. On the Black-Scholes problem, the generalised highway network reduced the MSE by 9.8% while using 96.2% fewer parameters than the MLP considered in (Liu et al., 2019).
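In the supervised setting, the networks are trained on synthetic labels generated from the closed-form model. As a minimal sketch (the thesis's actual data-generation pipeline is not reproduced here, and all parameter values are illustrative), Black-Scholes call-price targets could be produced with only the standard library:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF expressed via the error function (stdlib only).
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, tau, r, sigma):
    # Black-Scholes price of a European call, tau = time to maturity.
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return S * norm_cdf(d1) - K * exp(-r * tau) * norm_cdf(d2)

# One supervised training pair: inputs (S, K, tau, r, sigma), target price.
print(round(bs_call(100.0, 100.0, 1.0, 0.05, 0.2), 4))  # → 10.4506
```

A network trained on such (input, price) pairs is then evaluated by the MSE against held-out closed-form prices, which is the metric compared across the MLP, highway and DGM architectures.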

On the semi-supervised learning problem, where we directly optimise the neural network to satisfy the partial differential equation (PDE) and its boundary/initial conditions, we concluded that the architecture of the DGM network allows it to fit both the interior condition and the non-smooth terminal condition. As this was not the case for the MLP and highway networks, the DGM network turned out to be the best-performing architecture on the semi-supervised learning problems. Additionally, we found indications that the performance of the DGM network on the semi-supervised learning problem remained consistent as the dimensionality of the problem increased.
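The semi-supervised objective drives the PDE residual to zero in the interior of the domain while also penalising the terminal-condition error. As a hedged illustration of the quantity being minimised (step sizes, parameters and function names are illustrative, not taken from the thesis), the Black-Scholes residual can be checked against the closed-form solution with finite differences:

```python
from math import log, sqrt, exp, erf

R, SIGMA, K, T = 0.05, 0.2, 100.0, 1.0  # illustrative market parameters

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def price(S, t):
    # Closed-form Black-Scholes call value V(S, t) with maturity T.
    tau = T - t
    d1 = (log(S / K) + (R + 0.5 * SIGMA**2) * tau) / (SIGMA * sqrt(tau))
    d2 = d1 - SIGMA * sqrt(tau)
    return S * norm_cdf(d1) - K * exp(-R * tau) * norm_cdf(d2)

def pde_residual(S, t, hs=0.05, ht=1e-4):
    # Residual of V_t + 0.5*sigma^2*S^2*V_SS + r*S*V_S - r*V = 0,
    # approximated with central finite differences.
    v_t = (price(S, t + ht) - price(S, t - ht)) / (2 * ht)
    v_s = (price(S + hs, t) - price(S - hs, t)) / (2 * hs)
    v_ss = (price(S + hs, t) - 2 * price(S, t) + price(S - hs, t)) / hs**2
    return v_t + 0.5 * SIGMA**2 * S**2 * v_ss + R * S * v_s - R * price(S, t)

# The closed-form solution satisfies the PDE, so the residual is near zero.
print(abs(pde_residual(100.0, 0.5)) < 1e-3)
```

In training, `price` would be the neural network itself (with derivatives typically obtained by automatic differentiation rather than finite differences), and the loss combines the squared residual sampled over the interior with the squared error against the terminal payoff max(S − K, 0), whose kink is what makes the terminal condition non-smooth.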