Temporal difference learning with kernels for pricing American-style options
May 2005
Publication type:
Paper in peer-reviewed journals
Journal:
Optimization Online
Keywords:
TD Learning, Robbins-Monro Algorithm, Kernels
Abstract:
In this paper, we study the problem of estimating the cost-to-go function of an infinite-horizon discounted Markov chain with a possibly continuous state space. For implementation purposes, the state space is typically discretized, but as soon as its dimension becomes large the computation is no longer tractable, a phenomenon referred to as the curse of dimensionality. Approximation methods for dynamic programming are therefore of major importance. A powerful such method, often referred to as neuro-dynamic programming, consists in representing the Bellman function as a linear combination of a priori chosen basis functions, called neurons. In this article, we propose an alternative approach, closely related to temporal differences, based on functional gradient descent over an infinite kernel basis. Although aimed at infinite-dimensional problems, our algorithm is implementable in practice. We prove its convergence and illustrate it on applications such as Bermudan option pricing.
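As a rough illustration of the kind of update the abstract describes, the following minimal Python sketch keeps the value function as a growing expansion of Gaussian kernels centred on visited states and lets each temporal-difference error add one term with a Robbins-Monro step size. The kernel choice, bandwidth, transition dynamics, and step-size schedule are illustrative assumptions, not the authors' exact algorithm.

import numpy as np

def gaussian_kernel(x, y, bandwidth=0.5):
    # Gaussian (RBF) kernel; the bandwidth value is an illustrative assumption.
    x, y = np.atleast_1d(x).astype(float), np.atleast_1d(y).astype(float)
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * bandwidth ** 2))

class KernelTD:
    # Value function kept as an expanding kernel sum V(x) = sum_i w_i K(c_i, x).
    def __init__(self, gamma=0.95, bandwidth=0.5):
        self.gamma = gamma          # discount factor of the Markov chain
        self.bandwidth = bandwidth
        self.centers = []           # visited states, used as kernel centres
        self.weights = []           # coefficients of the kernel expansion

    def value(self, x):
        return sum(w * gaussian_kernel(c, x, self.bandwidth)
                   for w, c in zip(self.weights, self.centers))

    def update(self, x, reward, x_next, step):
        # Temporal-difference error for the observed transition x -> x_next.
        delta = reward + self.gamma * self.value(x_next) - self.value(x)
        # Functional gradient step: add a kernel centred at the current state.
        self.centers.append(np.atleast_1d(x).astype(float))
        self.weights.append(step * delta)
        return delta

# Toy usage on a simulated 1-D chain with Robbins-Monro step sizes 1/(t+1).
rng = np.random.default_rng(0)
td = KernelTD()
x = rng.normal()
for t in range(1000):
    x_next = 0.9 * x + 0.1 * rng.normal()   # hypothetical transition dynamics
    reward = -x ** 2                         # hypothetical running cost
    td.update(x, reward, x_next, step=1.0 / (t + 1))
    x = x_next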
BibTeX:
@article{Bar-Roy-Str-2005-1,
  author  = {Kengy Barty and Jean-Sébastien Roy and Cyrille Strugarek},
  title   = {Temporal difference learning with kernels for pricing {A}merican-style options},
  journal = {Optimization Online},
  year    = {2005},
  month   = {5}
}