{"entities":{"Q474272":{"pageid":476039,"ns":120,"title":"Item:Q474272","lastrevid":62092359,"modified":"2026-04-11T03:42:52Z","type":"item","id":"Q474272","labels":{"en":{"language":"en","value":"Multiagent reinforcement learning with regret matching for robot soccer"}},"descriptions":{"en":{"language":"en","value":"scientific article; zbMATH DE number 6372722"}},"aliases":{},"claims":{"P31":[{"mainsnak":{"snaktype":"value","property":"P31","hash":"fd5912e4dab4b881a8eb0eb27e7893fef55176ad","datavalue":{"value":{"entity-type":"item","numeric-id":56887,"id":"Q56887"},"type":"wikibase-entityid"},"datatype":"wikibase-item"},"type":"statement","id":"Q474272$7CBFA501-5ED1-49B5-BB96-965FDBF7F22E","rank":"normal"}],"P159":[{"mainsnak":{"snaktype":"value","property":"P159","hash":"c961a504c1ff211decd3311524ea20d76e801ce3","datavalue":{"value":{"text":"Multiagent reinforcement learning with regret matching for robot soccer","language":"en"},"type":"monolingualtext"},"datatype":"monolingualtext"},"type":"statement","id":"Q474272$5CC9D78B-EB4B-46CA-9101-2C55658C81CB","rank":"normal"}],"P225":[{"mainsnak":{"snaktype":"value","property":"P225","hash":"669a674e8bedd4e88b799c64b5316d2b16ac2acb","datavalue":{"value":"1299.68066","type":"string"},"datatype":"external-id"},"type":"statement","id":"Q474272$B0DE03D9-8FF2-483A-B604-EE88A7966D13","rank":"normal"}],"P16":[{"mainsnak":{"snaktype":"value","property":"P16","hash":"8fbedfb45dc4d0fa040d38a4d4128b072d8791ed","datavalue":{"value":{"entity-type":"item","numeric-id":474271,"id":"Q474271"},"type":"wikibase-entityid"},"datatype":"wikibase-item"},"type":"statement","id":"Q474272$2D418A8D-04CC-4F5C-9D69-FCEF594ABB4D","rank":"normal"},{"mainsnak":{"snaktype":"value","property":"P16","hash":"fe196296ee37715e8525a48649bc043809925323","datavalue":{"value":{"entity-type":"item","numeric-id":416634,"id":"Q416634"},"type":"wikibase-entityid"},"datatype":"wikibase-item"},"type":"statement","id":"Q474272$52F62E66-2092-4F9D-86CD-78D2BBA80DB6","
rank":"normal"},{"mainsnak":{"snaktype":"value","property":"P16","hash":"adf0e776d1abbaf00d3c0d4b99c9880bba28dd16","datavalue":{"value":{"entity-type":"item","numeric-id":321709,"id":"Q321709"},"type":"wikibase-entityid"},"datatype":"wikibase-item"},"type":"statement","id":"Q474272$6F1DABB3-C95F-4BAB-A780-EAD14789D697","rank":"normal"}],"P200":[{"mainsnak":{"snaktype":"value","property":"P200","hash":"3dc97bc0aff607b9c22ce37ffa18b6de85001d90","datavalue":{"value":{"entity-type":"item","numeric-id":86199,"id":"Q86199"},"type":"wikibase-entityid"},"datatype":"wikibase-item"},"type":"statement","id":"Q474272$D2339952-2ED9-42EE-99A9-10667D693C89","rank":"normal"}],"P28":[{"mainsnak":{"snaktype":"value","property":"P28","hash":"baf80493586065a1490747a28bc7db754b9ba183","datavalue":{"value":{"time":"+2014-11-24T00:00:00Z","timezone":0,"before":0,"after":0,"precision":11,"calendarmodel":"http://www.wikidata.org/entity/Q1985727"},"type":"time"},"datatype":"time"},"type":"statement","id":"Q474272$8C783539-3BAD-4FC1-89C7-B30FB242E885","rank":"normal"}],"P1448":[{"mainsnak":{"snaktype":"value","property":"P1448","hash":"8e0503de58a8f0767502fb07da259426ea941b52","datavalue":{"value":"Summary: This paper proposes a novel multiagent reinforcement learning (MARL) algorithm, Nash-\\(Q\\) learning with regret matching, in which regret matching is used to speed up the well-known MARL algorithm Nash-\\(Q\\) learning. Choosing a suitable action-selection strategy that balances exploration and exploitation is critical to enhancing the online learning ability of Nash-\\(Q\\) learning. In a Markov game, the joint actions of agents adopting the regret matching algorithm converge to a set of no-regret points, which can be viewed as a coarse correlated equilibrium that in essence includes the Nash equilibrium. 
It can be inferred that regret matching guides exploration of the state-action space, so that the convergence rate of the Nash-\\(Q\\) learning algorithm is increased. Simulation results on robot soccer validate that, compared with the original Nash-\\(Q\\) learning algorithm, using regret matching during the learning phase of Nash-\\(Q\\) learning yields excellent online learning ability and significantly better performance in terms of scores, average reward, and policy convergence.","type":"string"},"datatype":"string"},"type":"statement","id":"Q474272$12C8581C-A61E-44C6-9867-C013FA984232","rank":"normal"}],"P226":[{"mainsnak":{"snaktype":"value","property":"P226","hash":"cfe779e91fe9c53ee133568259955801965765ae","datavalue":{"value":"68T05","type":"string"},"datatype":"external-id"},"type":"statement","id":"Q474272$F4134750-80F0-4723-9CAA-A66521CC7D96","rank":"normal"},{"mainsnak":{"snaktype":"value","property":"P226","hash":"1eb0154c424a004351cf55226db6d71466331447","datavalue":{"value":"91A26","type":"string"},"datatype":"external-id"},"type":"statement","id":"Q474272$8F5E99BD-0D8C-47E5-9999-3C354EBA6E20","rank":"normal"},{"mainsnak":{"snaktype":"value","property":"P226","hash":"324c182fa6009b8a0931eb0d449b845cbbb3125a","datavalue":{"value":"94A13","type":"string"},"datatype":"external-id"},"type":"statement","id":"Q474272$ED132E71-4893-402D-BEBB-CE3779F9385B","rank":"normal"}],"P1451":[{"mainsnak":{"snaktype":"value","property":"P1451","hash":"bb145d9ad6c857e9a1856da559315e77bbed038c","datavalue":{"value":"6372722","type":"string"},"datatype":"external-id"},"type":"statement","id":"Q474272$1A0C75BB-67AB-4BF8-8354-AD860648AEED","rank":"normal"}],"P1460":[{"mainsnak":{"snaktype":"value","property":"P1460","hash":"57f7fea50d2ce1b39b695c4a1313582eed405e38","datavalue":{"value":{"entity-type":"item","numeric-id":5976449,"id":"Q5976449"},"type":"wikibase-entityid"},"datatype":"wikibase-item"},"type":"statement","id":"Q474272$46B981F1-9524-4FC0-A3B4-C3884D7798
9A","rank":"normal"}],"P1643":[{"mainsnak":{"snaktype":"value","property":"P1643","hash":"26433e286790b7096c8c2e3e4635deb6c86027a1","datavalue":{"value":{"entity-type":"item","numeric-id":3500474,"id":"Q3500474"},"type":"wikibase-entityid"},"datatype":"wikibase-item"},"type":"statement","qualifiers":{"P1659":[{"snaktype":"value","property":"P1659","hash":"d364b79af233268f52172d65373a14b6b6251a81","datavalue":{"value":{"amount":"+0.7567797303199768","unit":"1"},"type":"quantity"},"datatype":"quantity"}],"P1660":[{"snaktype":"value","property":"P1660","hash":"a327a09ea0305e98d5cf33bd4036320e19f2aed0","datavalue":{"value":{"entity-type":"item","numeric-id":6821328,"id":"Q6821328"},"type":"wikibase-entityid"},"datatype":"wikibase-item"}]},"qualifiers-order":["P1659","P1660"],"id":"Q474272$DE73FF13-F6CC-4293-8979-A27745A7021D","rank":"normal"},{"mainsnak":{"snaktype":"value","property":"P1643","hash":"c728a9f390f03605a0d7380028b7e175f0d581ba","datavalue":{"value":{"entity-type":"item","numeric-id":4825999,"id":"Q4825999"},"type":"wikibase-entityid"},"datatype":"wikibase-item"},"type":"statement","qualifiers":{"P1659":[{"snaktype":"value","property":"P1659","hash":"5bd1de9e1d329c6fbe970a3f79efc6693f987b0d","datavalue":{"value":{"amount":"+0.7463971972465515","unit":"1"},"type":"quantity"},"datatype":"quantity"}],"P1660":[{"snaktype":"value","property":"P1660","hash":"a327a09ea0305e98d5cf33bd4036320e19f2aed0","datavalue":{"value":{"entity-type":"item","numeric-id":6821328,"id":"Q6821328"},"type":"wikibase-entityid"},"datatype":"wikibase-item"}]},"qualifiers-order":["P1659","P1660"],"id":"Q474272$EB3721B7-FB1D-4042-9057-6099321BAE91","rank":"normal"},{"mainsnak":{"snaktype":"value","property":"P1643","hash":"bbda0385e845feff969a5b0ed8b66daf801b1ab8","datavalue":{"value":{"entity-type":"item","numeric-id":3406320,"id":"Q3406320"},"type":"wikibase-entityid"},"datatype":"wikibase-item"},"type":"statement","qualifiers":{"P1659":[{"snaktype":"value","property":"P1659","hash":"b5
ca1e621c614c085159ceabed63ed13d8c5863a","datavalue":{"value":{"amount":"+0.7408413887023926","unit":"1"},"type":"quantity"},"datatype":"quantity"}],"P1660":[{"snaktype":"value","property":"P1660","hash":"a327a09ea0305e98d5cf33bd4036320e19f2aed0","datavalue":{"value":{"entity-type":"item","numeric-id":6821328,"id":"Q6821328"},"type":"wikibase-entityid"},"datatype":"wikibase-item"}]},"qualifiers-order":["P1659","P1660"],"id":"Q474272$988D1CC9-B443-4548-837B-49DD78DF2FAF","rank":"normal"},{"mainsnak":{"snaktype":"value","property":"P1643","hash":"35b0fc7015e8035c72a873a5341c40a14ce897de","datavalue":{"value":{"entity-type":"item","numeric-id":3096210,"id":"Q3096210"},"type":"wikibase-entityid"},"datatype":"wikibase-item"},"type":"statement","qualifiers":{"P1659":[{"snaktype":"value","property":"P1659","hash":"09d9ead7c31c1a1f2154f104db6d3f9169314018","datavalue":{"value":{"amount":"+0.7346270084381104","unit":"1"},"type":"quantity"},"datatype":"quantity"}],"P1660":[{"snaktype":"value","property":"P1660","hash":"a327a09ea0305e98d5cf33bd4036320e19f2aed0","datavalue":{"value":{"entity-type":"item","numeric-id":6821328,"id":"Q6821328"},"type":"wikibase-entityid"},"datatype":"wikibase-item"}]},"qualifiers-order":["P1659","P1660"],"id":"Q474272$98486644-60E5-4FBE-92F6-5E2E4626B3AE","rank":"normal"},{"mainsnak":{"snaktype":"value","property":"P1643","hash":"d701bc6fbd1480d33638f4564a7aa091a34731ef","datavalue":{"value":{"entity-type":"item","numeric-id":1028930,"id":"Q1028930"},"type":"wikibase-entityid"},"datatype":"wikibase-item"},"type":"statement","qualifiers":{"P1659":[{"snaktype":"value","property":"P1659","hash":"69cffe5f6a054092de25f1e6cc9a4f7e0e827ec7","datavalue":{"value":{"amount":"+0.7228768467903137","unit":"1"},"type":"quantity"},"datatype":"quantity"}],"P1660":[{"snaktype":"value","property":"P1660","hash":"a327a09ea0305e98d5cf33bd4036320e19f2aed0","datavalue":{"value":{"entity-type":"item","numeric-id":6821328,"id":"Q6821328"},"type":"wikibase-entityid"},"data
type":"wikibase-item"}]},"qualifiers-order":["P1659","P1660"],"id":"Q474272$364D8A64-F15A-4D15-88E5-98E1316059FC","rank":"normal"}]},"sitelinks":{"mardi":{"site":"mardi","title":"Multiagent reinforcement learning with regret matching for robot soccer","badges":[],"url":"https://portal.mardi4nfdi.de/wiki/Multiagent_reinforcement_learning_with_regret_matching_for_robot_soccer"}}}}}