Functional Approximations and Dynamic Programming

Publication: 3273603

DOI: 10.2307/2002797 · zbMath: 0095.34403 · OpenAlex: W4247446124 · MaRDI QID: Q3273603

Stuart E. Dreyfus, Richard Bellman

Publication date: 1959

Published in: Mathematical Tables and Other Aids to Computation

Full work available at URL: https://doi.org/10.2307/2002797




Related Items (27)

A review of stochastic algorithms with continuous value function approximation and some new approximate policy iteration algorithms for multidimensional continuous applications
Adaptive importance sampling for control and inference
Hybrid functions of Bernstein polynomials and block-pulse functions for solving optimal control of the nonlinear Volterra integral equations
Perspectives of approximate dynamic programming
A generalized Kalman filter for fixed point approximation and efficient temporal-difference learning
Feature-based methods for large scale dynamic programming
Large-Scale Loan Portfolio Selection
Unnamed Item
Policy mirror descent for reinforcement learning: linear convergence, new sampling complexity, and generalized problem classes
Totally model-free actor-critic recurrent neural-network reinforcement learning in non-Markovian domains
Improving reinforcement learning algorithms: Towards optimal learning rate policies
Dynamic programming and value-function approximation in sequential decision problems: error analysis and numerical results
Symmetry reduction for dynamic programming
A unified framework for stochastic optimization
Approximate dynamic programming for stochastic \(N\)-stage optimization with application to optimal consumption under uncertainty
Unnamed Item
Operation of storage reservoir for water quality by using optimization and artificial intelligence techniques
Decomposition of large-scale stochastic optimal control problems
Valuing portfolios of interdependent real options using influence diagrams and simulation-and-regression: a multi-stage stochastic integer programming approach
Empirical Dynamic Programming
What you should know about approximate dynamic programming
Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path
Suboptimal Policies for Stochastic \(N\)-Stage Optimization: Accuracy Analysis and a Case Study from Optimal Consumption
An application of approximate dynamic programming in multi-period multi-product advertising budgeting
Using OPTRANS object as a KB-DSS development environment for designing DSS for production management
Natural actor-critic algorithms
On the existence of fixed points for approximate value iteration and temporal-difference learning




This page was built for publication: Functional Approximations and Dynamic Programming