Paper Title
Calibration of Derivative Pricing Models: a Multi-Agent Reinforcement Learning Perspective
Paper Authors
Paper Abstract
One of the most fundamental questions in quantitative finance is the existence of continuous-time diffusion models that fit the market prices of a given set of options. Traditionally, one employs a mix of intuition, theoretical analysis, and empirical analysis to find models that achieve exact or approximate fits. Our contribution is to show how a suitable game-theoretic formulation of this problem can help solve it, by leveraging existing developments in modern deep multi-agent reinforcement learning to search the space of stochastic processes. Our experiments show that we are able to learn local volatility, as well as the path-dependence required in the volatility process, to minimize the price of a Bermudan option. Our algorithm can be seen as a particle method \textit{à la} Guyon \textit{et} Henry-Labordère where particles, instead of being designed to ensure $\sigma_{loc}(t,S_t)^2 = \mathbb{E}[\sigma_t^2 \mid S_t]$, are learning RL-driven agents cooperating towards more general calibration targets.
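As an illustrative sketch only (not the paper's implementation), the conditional-expectation condition $\sigma_{loc}(t,S_t)^2 = \mathbb{E}[\sigma_t^2 \mid S_t]$ that particle methods enforce can be approximated from a cloud of simulated particles with a Nadaraya–Watson kernel regression. All names and parameters below (the Gaussian kernel, the bandwidth, the toy spot/variance distribution) are assumptions chosen for the demonstration:

```python
import numpy as np

def kernel_cond_expectation(S, v, bandwidth):
    """Nadaraya-Watson estimate of E[v | S] evaluated at each particle's spot.

    S         -- array of particle spot values S_t^i
    v         -- array of particle squared volatilities (sigma_t^i)^2
    bandwidth -- Gaussian kernel bandwidth (a tuning assumption)
    """
    # Pairwise Gaussian kernel weights between particles
    diffs = (S[:, None] - S[None, :]) / bandwidth
    w = np.exp(-0.5 * diffs**2)
    # Weighted average of v around each spot level
    return w @ v / w.sum(axis=1)

# Toy example: particles whose squared volatility depends on the spot,
# plus idiosyncratic noise that the conditional expectation averages out.
rng = np.random.default_rng(0)
S = rng.normal(100.0, 10.0, size=2000)              # particle spots S_t^i
true_cond = 0.04 + 0.0001 * (S - 100.0) ** 2        # E[sigma_t^2 | S_t]
sigma2 = true_cond + 0.005 * rng.standard_normal(S.size)

est = kernel_cond_expectation(S, sigma2, bandwidth=2.0)
```

In a particle calibration scheme, `est` would feed back into each particle's diffusion coefficient; the RL formulation described in the abstract replaces this fixed feedback rule with agents learning towards more general targets.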