A new learning algorithm for optimal stopping
DOI: 10.1007/s10626-008-0055-2
zbMath: 1168.91356
OpenAlex: W2093448583
MaRDI QID: Q839001
Jervis Pinto, Vivek S. Borkar, Tarun Prabhu
Publication date: 1 September 2009
Published in: Discrete Event Dynamic Systems
Full work available at URL: https://doi.org/10.1007/s10626-008-0055-2
Mathematics Subject Classification:
- Discrete-time Markov processes on general state spaces (60J05)
- Learning and adaptive systems in artificial intelligence (68T05)
- Linear programming (90C05)
- Microeconomic theory (price theory and economic markets) (91B24)
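For context, the classifications above place this work in optimal stopping of discrete-time Markov processes. As general background only (not a description of the specific learning algorithm introduced in this paper), such a problem is typically stated through the Bellman equation for the value function \(V\), where \(g\) is the stopping reward and \(\alpha \in (0,1)\) a discount factor:

\[
V(x) = \max\bigl( g(x),\; \alpha\,\mathbb{E}\bigl[ V(X_{t+1}) \mid X_t = x \bigr] \bigr),
\]

with the optimal rule stopping at the first time \(t\) at which \(g(X_t) \ge \alpha\,\mathbb{E}[V(X_{t+1}) \mid X_t]\).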
Related Items
- Approximate dynamic programming via direct search in the space of value function approximations
- Multiobjective Stopping Problem for Discrete-Time Markov Processes: Convex Analytic Approach
Cites Work
- A generalized Kalman filter for fixed point approximation and efficient temporal-difference learning
- Stochastic approximation with two time scales
- An actor-critic algorithm for constrained Markov decision processes
- A note on linear function approximation using random projections
- Adaptive Importance Sampling Technique for Markov Chains Using Stochastic Approximation
- The Linear Programming Approach to Approximate Dynamic Programming
- Pricing American Options: A Duality Approach
- On Actor-Critic Algorithms
- Optimal stopping of Markov processes: Hilbert space theory, approximation algorithms, and an application to pricing high-dimensional financial derivatives
- Linear Programming Formulation for Optimal Stopping Problems
- Monte Carlo valuation of American options
- Average Optimality in Markov Control Processes via Discounted-Cost Problems and Linear Programming
- The O.D.E. Method for Convergence of Stochastic Approximation and Reinforcement Learning
- Valuing American Options by Simulation: A Simple Least-Squares Approach
- On Constraint Sampling in the Linear Programming Approach to Approximate Dynamic Programming