Coordinated learning by model difference identification in multiagent systems with sparse interactions (Q1677725)

From MaRDI portal
Property / Wikidata QID: Q59123444
Property / MaRDI profile type: MaRDI publication profile
Property / full work available at URL: https://doi.org/10.1155/2016/3207460
Property / OpenAlex ID: W2531204978

Language: English
Label: Coordinated learning by model difference identification in multiagent systems with sparse interactions
Description: scientific article

    Statements

    Coordinated learning by model difference identification in multiagent systems with sparse interactions (English)
    13 November 2017
    Summary: Multiagent Reinforcement Learning (MARL) is a promising technique for agents to learn effective coordinated policies in Multiagent Systems (MASs). In many MASs, interactions between agents are sparse, and many MARL methods have been devised to exploit this. These methods divide the learning process into independent learning and joint learning in coordinated states, improving on learning over the full joint state-action space. However, most of them identify coordinated states based on assumptions about the domain structure (e.g., dependencies) or the agents (e.g., prior individual optimal policies and agent homogeneity), and situations remain that current methods cannot handle. In this paper, a modified approach is proposed to learn where and how to coordinate agents' behaviors in more general MASs with sparse interactions. The approach introduces sample grouping and a more accurate metric of the model difference degree to identify which states of other agents should be included in coordinated states, without strong additional assumptions. Experimental results show that the proposed approach outperforms its competitors in average agent reward per step and works well in some broader scenarios.
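    To make the idea in the summary concrete, below is a minimal Python sketch of a model-difference-identification step. It is not the paper's algorithm: the class name ModelDifferenceDetector, the grouping of samples into "solo" (no other agent nearby) versus "joint" (another agent nearby), the L1 distance between empirical next-state distributions, and the fixed threshold are all hypothetical choices made here for exposition.

import numpy as np
from collections import defaultdict

class ModelDifferenceDetector:
    """Flags local states whose transition model changes when other
    agents are nearby -- an illustrative stand-in for a 'model
    difference degree' over grouped samples."""

    def __init__(self, threshold=0.2):
        self.threshold = threshold  # hypothetical cutoff, not from the paper
        # counts[group][s][s_next] = number of observed transitions
        self.counts = {
            "solo": defaultdict(lambda: defaultdict(int)),
            "joint": defaultdict(lambda: defaultdict(int)),
        }

    def record(self, group, s, s_next):
        # Group each transition sample by whether other agents were nearby.
        self.counts[group][s][s_next] += 1

    def _dist(self, group, s, support):
        total = sum(self.counts[group][s].values())
        if total == 0:
            return None  # not enough samples in this group yet
        return np.array([self.counts[group][s][sn] / total for sn in support])

    def difference_degree(self, s):
        # L1 distance between the solo and joint empirical
        # next-state distributions for local state s.
        support = sorted(set(self.counts["solo"][s]) | set(self.counts["joint"][s]))
        p = self._dist("solo", s, support)
        q = self._dist("joint", s, support)
        if p is None or q is None:
            return 0.0
        return float(np.abs(p - q).sum())

    def is_coordinated(self, s):
        return self.difference_degree(s) > self.threshold

# Usage sketch: flag state 2 if the two models diverge.
det = ModelDifferenceDetector(threshold=0.3)
det.record("solo", s=2, s_next=3)
det.record("joint", s=2, s_next=5)
coordinated = det.is_coordinated(2)

    Under this sketch, an agent would run ordinary independent learning in unflagged states and expand its state to include the other agents' states only in flagged ones, mirroring the independent/joint split described in the summary.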