Mean-Semivariance Policy Optimization via Risk-Averse Reinforcement Learning (Q5870485)

From MaRDI portal
 
Property / cites work: Risk-Sensitive Optimal Control for Markov Decision Processes with Monotone Cost / rank: Normal rank
Property / cites work: Q5425954 / rank: Normal rank
Property / cites work: Multi-period mean-semivariance portfolio optimization based on uncertain measure / rank: Normal rank
Property / cites work: Risk-Constrained Reinforcement Learning with Percentile Risk Criteria / rank: Normal rank
Property / cites work: Mean-Variance Tradeoffs in an Undiscounted MDP: The Unichain Case / rank: Normal rank
Property / cites work: Variance-Penalized Markov Decision Processes / rank: Normal rank
Property / cites work: Q5744808 / rank: Normal rank
Property / cites work: Variance-penalized Markov decision processes: dynamic programming and reinforcement learning techniques / rank: Normal rank
Property / cites work: Risk-Sensitive Markov Decision Processes / rank: Normal rank
Property / cites work: Optimal Dynamic Portfolio Selection: Multiperiod Mean-Variance Formulation / rank: Normal rank
Property / cites work: A multi-period fuzzy portfolio optimization model with minimum transaction lots / rank: Normal rank
Property / cites work: Computation of mean-semivariance efficient sets by the critical line algorithm / rank: Normal rank
Property / cites work: Convex Approximations of Chance Constrained Programs / rank: Normal rank
Property / cites work: Variance-constrained actor-critic algorithms for discounted and average reward MDPs / rank: Normal rank
Property / cites work: Risk-averse dynamic programming for Markov decision processes / rank: Normal rank
Property / cites work: Lectures on Stochastic Programming: Modeling and Theory, Third Edition / rank: Normal rank
Property / cites work: The variance of discounted Markov decision processes / rank: Normal rank
Property / cites work: Q4626283 / rank: Normal rank
Property / cites work: Sequential Decision Making With Coherent Risk / rank: Normal rank
Property / cites work: Mean-semivariance optimality for continuous-time Markov decision processes / rank: Normal rank
Property / cites work: Optimization of Markov decision processes under the variance criterion / rank: Normal rank
Property / cites work: Multi-period semi-variance portfolio selection: model and numerical solution / rank: Normal rank
Property / cites work: A possibilistic mean-semivariance-entropy model for multi-period portfolio selection with transaction costs / rank: Normal rank


Language: English
Label: Mean-Semivariance Policy Optimization via Risk-Averse Reinforcement Learning
Description: scientific article; zbMATH DE number 7639805

    Statements

    Mean-Semivariance Policy Optimization via Risk-Averse Reinforcement Learning (English)
    9 January 2023
    reinforcement learning
    Markov decision processes
    planning
    machine learning

    Identifiers

    zbMATH DE number 7639805